
Condor Tutorial
GGF-5 / HPDC-11, July 2002
John Bent and Douglas Thain
Computer Sciences Department, University of Wisconsin-Madison
[email protected], [email protected]
http://www.cs.wisc.edu/condor

Outline
› Session One - Doug
  - About Condor (17 slides)
  - Frieda the Scientist (26 slides)
› Session Two - John
  - Managing Jobs (25 slides)
  - Sharing Resources (30 slides)
› Session Three - Doug
  - Expanding to the Grid (36 slides)
  - Case Study: DTF (17 slides)
› Session Four - John
  - Research Directions (38 slides)
  - Wrap-Up and Discussion

About Condor
› What does Condor do?
› What is Condor good for?
› What kind of results can I expect?

The Condor Project (Established '85)
Distributed High Throughput Computing research performed by a team of ~25 faculty, full-time staff and students who:
- face software engineering challenges in a distributed UNIX/Linux/NT environment,
- are involved in national and international collaborations,
- actively interact with academic and commercial users,
- maintain and support a large distributed production environment,
- and educate and train students.
Funding - US Govt. (DoD, DoE, NASA, NSF), AT&T, IBM, INTEL, Microsoft, UW-Madison

What is High-Throughput Computing?
› High-performance: CPU cycles/second under ideal circumstances.
  - "How fast can I run simulation X on this machine?"
› High-throughput: CPU cycles/day (week, month, year?) under non-ideal circumstances.
  - "How many times can I run simulation X in the next month using all available machines?"

What is Condor?
› Condor converts collections of distributively owned workstations and dedicated clusters into a distributed high-throughput computing facility.
› Condor uses ClassAd Matchmaking to make sure that everyone is happy.

The Condor System
› Unix and NT
› Operational since 1986
› Manages more than 1300 CPUs at UW-Madison
› Software available free on the web
› More than 150 Condor installations worldwide in academia and industry

Some HTC Challenges
› Condor does whatever it takes to run your jobs, even if some machines...
  - Crash (or are disconnected)
  - Run out of disk space
  - Don't have your software installed
  - Are frequently needed by others
  - Are far away & managed by someone else

What is ClassAd Matchmaking?
› Condor uses ClassAd Matchmaking to make sure that work gets done within the constraints of both users and owners.
› Users (jobs) have constraints:
  - "I need an Alpha with 256 MB RAM"
› Owners (machines) have constraints:
  - "Only run jobs when I am away from my desk and never run jobs owned by Bob."

Upgrade to Condor-G
A Grid-enabled version of Condor that provides robust job management for Globus.
- Robust replacement for globusrun
- Provides extensive fault tolerance
- Brings Condor's job management features to Globus jobs

What Have We Done on the Grid Already?
› Example: NUG30
  - quadratic assignment problem
  - 30 facilities, 30 locations
    • minimize cost of transferring materials between them
  - posed in 1968 as a challenge, long unsolved
  - but with a good pruning algorithm & high-throughput computing...

NUG30 Solved on the Grid with Condor + Globus
Resources simultaneously utilized:
› the Origin 2000 (through LSF) at NCSA
› the Chiba City Linux cluster at Argonne
› the SGI Origin 2000 at Argonne
› the main Condor pool at Wisconsin (600 processors)
› the Condor pool at Georgia Tech (190 Linux boxes)
› the Condor pool at UNM (40 processors)
› the Condor pool at Columbia (16 processors)
› the Condor pool at Northwestern (12 processors)
› the Condor pool at NCSA (65 processors)
› the Condor pool at INFN (200 processors)

NUG30 - Solved!!!
Sender: goux@dantec.ece.nwu.edu
Subject: Re: Let the festivities begin.

Hi dear Condor Team,
you all have been amazing. NUG30 required 10.9 years of Condor Time. In just seven days!
More stats tomorrow!!! We are off celebrating!
condor rules!
cheers, JP.

The Idea
Computing power is everywhere; we try to make it usable by anyone.

Outline
› About Condor
› Frieda the Scientist
› Managing Jobs
› Sharing Resources
› Expanding to the Grid
› Case Study: DTF
› Research Directions

Meet Frieda. She is a scientist. But she has a big problem.

Frieda's Application ...
Simulate the behavior of F(x,y,z) for 20 values of x, 10 values of y and 3 values of z (20*10*3 = 600 combinations)
- F takes on the average 3 hours to compute on a "typical" workstation (total = 1800 hours)
- F requires a "moderate" (128 MB) amount of memory
- F performs "moderate" I/O - (x,y,z) is 5 MB and F(x,y,z) is 50 MB

I have 600 simulations to run. Where can I get help?

Norim the Genie: "Install a Personal Condor!"

Installing Condor
› Download Condor for your operating system
› Available as a free download from http://www.cs.wisc.edu/condor
› Stable -vs- Developer Releases
  - Naming scheme similar to the Linux Kernel...
› Available for most Unix platforms and Windows NT

So Frieda Installs Personal Condor on her machine...
› What do we mean by a "Personal" Condor?
  - Condor on your own workstation, no root access required, no system administrator intervention needed
› So after installation, Frieda submits her jobs to her Personal Condor...

(Diagram: 600 Condor jobs submitted to the personal Condor on your workstation)

Personal Condor?! What's the benefit of a Condor "Pool" with just one user and one machine?

Your Personal Condor will ...
› ... keep an eye on your jobs and will keep you posted on their progress
› ... implement your policy on the execution order of the jobs
› ... keep a log of your job activities
› ... add fault tolerance to your jobs
› ... implement your policy on when the jobs can run on your workstation

Getting Started: Submitting Jobs to Condor
› Choosing a "Universe" for your job
  - Just use VANILLA for now
› Make your job "batch-ready"
› Creating a submit description file
› Run condor_submit on your submit description file

Making your job batch-ready
› Must be able to run in the background: no interactive input, windows, GUI, etc.
› Can still use STDIN, STDOUT, and STDERR (the keyboard and the screen), but files are used for these instead of the actual devices
› Organize data files

Creating a Submit Description File
› A plain ASCII text file
› Tells Condor about your job:
  - Which executable, universe, input, output and error files to use, command-line arguments, environment variables, any special requirements or preferences (more on this later)
› Can describe many jobs at once (a "cluster"), each with different input, arguments, output, etc.

Simple Submit Description File

# Simple condor_submit input file
# (Lines beginning with # are comments)
# NOTE: the words on the left side are not
#       case sensitive, but filenames are!
Universe   = vanilla
Executable = my_job
Queue

Running condor_submit
› You give condor_submit the name of the submit file you have created
› condor_submit parses the file, checks for errors, and creates a "ClassAd" that describes your job(s)
› Sends your job's ClassAd(s) and executable to the condor_schedd, which stores the job in its queue
  - Atomic operation, two-phase commit
› View the queue with condor_q

Running condor_submit

% condor_submit my_job.submit-file
Submitting job(s).
1 job(s) submitted to cluster 1.

% condor_q
-- Submitter: perdita.cs.wisc.edu : <128.105.165.34:1027> :
 ID      OWNER   SUBMITTED     RUN_TIME   ST PRI SIZE CMD
   1.0   frieda  6/16 06:52    0+00:00:00 I  0   0.0  my_job

1 jobs; 1 idle, 0 running, 0 held
%

Another Submit Description File

# Example condor_submit input file
# (Lines beginning with # are comments)
# NOTE: the words on the left side are not
#       case sensitive, but filenames are!
Universe   = vanilla
Executable = /home/wright/condor/my_job.condor
Input      = my_job.stdin
Output     = my_job.stdout
Error      = my_job.stderr
Arguments  = -arg1 -arg2
InitialDir = /home/wright/condor/run_1
Queue

"Clusters" and "Processes"
› If your submit file describes multiple jobs, we call this a "cluster"
› Each job within a cluster is called a "process" or "proc"
› If you only specify one job, you still get a cluster, but it has only one process
› A Condor "Job ID" is the cluster number, a period, and the process number ("23.5")
› Process numbers always start at 0

Example Submit Description File for a Cluster

# Example condor_submit input file that defines
# a cluster of two jobs with different iwd
Universe   = vanilla
Executable = my_job
Arguments  = -arg1 -arg2
InitialDir = run_0
Queue            (becomes job 2.0)
InitialDir = run_1
Queue            (becomes job 2.1)

% condor_submit my_job.submit-file
Submitting job(s).
2 job(s) submitted to cluster 2.

% condor_q
-- Submitter: perdita.cs.wisc.edu : <128.105.165.34:1027> :
 ID      OWNER   SUBMITTED     RUN_TIME   ST PRI SIZE CMD
   1.0   frieda  6/16 06:52    0+00:02:11 R  0   0.0  my_job
   2.0   frieda  6/16 06:56    0+00:00:00 I  0   0.0  my_job
   2.1   frieda  6/16 06:56    0+00:00:00 I  0   0.0  my_job

3 jobs; 2 idle, 1 running, 0 held
%

Submit Description File for a BIG Cluster of Jobs
› The initial directory for each job is specified with the $(Process) macro, and instead of submitting a single job, we use "Queue 600" to submit 600 jobs at once
› $(Process) will be expanded to the process number for each job in the cluster (from 0 up to 599 in this case), so we'll have "run_0", "run_1", ... "run_599" directories
› All the input/output files will be in different directories!

Submit Description File for a BIG Cluster of Jobs

# Example condor_submit input file that defines
# a cluster of 600 jobs with different iwd
Universe   = vanilla
Executable = my_job
Arguments  = -arg1 -arg2
InitialDir = run_$(Process)
Queue 600

Using condor_rm
› If you want to remove a job from the Condor queue, you use condor_rm
› You can only remove jobs that you own (you can't run condor_rm on someone else's jobs unless you are root)
› You can give specific job IDs (cluster or cluster.proc), or you can remove all of your jobs with the "-a" option.
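For example (the job IDs are just the ones from the earlier condor_q listings; any cluster or cluster.proc of your own works the same way):

% condor_rm 2.1     Remove only proc 1 of cluster 2
% condor_rm 2       Remove every job in cluster 2
% condor_rm -a      Remove all of your jobs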

Temporarily halt a Job
› Use condor_hold to place a job on hold
  - Kills job if currently running
  - Will not attempt to restart job until released
› Use condor_release to remove a hold and permit job to be scheduled again
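A minimal example, reusing job 1.0 from the earlier listings:

% condor_hold 1.0       Put job 1.0 on hold (killing it if it is currently running)
% condor_release 1.0    Release it so it can be scheduled again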

Using condor_history
› Once your job completes, it will no longer show up in condor_q
› You can use condor_history to view information about a completed job
› The status field ("ST") will have either a "C" for "completed", or an "X" if the job was removed with condor_rm
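For example (the output columns resemble condor_q and may vary slightly between Condor versions):

% condor_history        Show all of your completed jobs
% condor_history 2      Show only the completed jobs from cluster 2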

Getting Email from Condor
› By default, Condor will send you email when your job completes
  - With lots of information about the run
› If you don't want this email, put this in your submit file:
    notification = never
› If you want email every time something happens to your job (preempt, exit, etc), use this:
    notification = always

Getting Email from Condor (cont'd)
› If you only want email in case of errors, use this:
    notification = error
› By default, the email is sent to your account on the host you submitted from. If you want the email to go to a different address, use this:
    notify_user = [email protected]

Outline
› About Condor
› Frieda the Scientist
› Managing Jobs
› Sharing Resources
› Expanding to the Grid
› Case Study: DTF
› Research Directions

A Job's life story: The "User Log" file
› A UserLog must be specified in your submit file:
  - Log = filename
› You get a log entry for everything that happens to your job:
  - When it was submitted, when it starts executing, preempted, restarted, completes, if there are any problems, etc.
› Very useful! Highly recommended!

Sample Condor User Log

000 (8135.000) 05/25 19:10:03 Job submitted from host: <128.105.146.14:1816>
...
001 (8135.000) 05/25 19:12:17 Job executing on host: <128.105.165.131:1026>
...
005 (8135.000) 05/25 19:13:06 Job terminated.
        (1) Normal termination (return value 0)
                Usr 0 00:37, Sys 0 00:00  -  Run Remote Usage
                Usr 0 00:00, Sys 0 00:05  -  Run Local Usage
                Usr 0 00:37, Sys 0 00:00  -  Total Remote Usage
                Usr 0 00:00, Sys 0 00:05  -  Total Local Usage
                9624     -  Run Bytes Sent By Job
                7146159  -  Run Bytes Received By Job
                9624     -  Total Bytes Sent By Job
                7146159  -  Total Bytes Received By Job
...

Uses for the User Log
› Easily read by human or machine
  - C++ library and Perl module for parsing UserLogs is available
› Event triggers for meta-schedulers
  - Like DAGMan...
› Visualizations of job progress
  - Condor JobMonitor Viewer

(Condor JobMonitor screenshot)

Job Priorities w/ condor_prio
› condor_prio allows you to specify the order in which your jobs are started
› The higher the prio #, the earlier the job will start

% condor_q
-- Submitter: perdita.cs.wisc.edu : <128.105.165.34:1027> :
 ID      OWNER   SUBMITTED     RUN_TIME   ST PRI SIZE CMD
   1.0   frieda  6/16 06:52    0+00:02:11 R  0   0.0  my_job

% condor_prio +5 1.0

% condor_q
-- Submitter: perdita.cs.wisc.edu : <128.105.165.34:1027> :
 ID      OWNER   SUBMITTED     RUN_TIME   ST PRI SIZE CMD
   1.0   frieda  6/16 06:52    0+00:02:13 R  5   0.0  my_job

Want other scheduling possibilities? Extend with the Scheduler Universe
› In addition to VANILLA, another job universe is the Scheduler Universe.
› Scheduler Universe jobs run on the submitting machine and serve as a meta-scheduler.
› DAGMan meta-scheduler included

DAGMan
› Directed Acyclic Graph Manager
› DAGMan allows you to specify the dependencies between your Condor jobs, so it can manage them automatically for you.
› (e.g., "Don't run job "B" until job "A" has completed successfully.")

What is a DAG?
› A DAG is the data structure used by DAGMan to represent these dependencies.
› Each job is a "node" in the DAG.
› Each node can have any number of "parent" or "children" nodes - as long as there are no loops!
(Diagram: Job A is the parent of Jobs B and C, which are both parents of Job D)

Defining a DAG
› A DAG is defined by a .dag file, listing each of its nodes and their dependencies:

# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D

› Each node will run the Condor job specified by its accompanying Condor submit file

Submitting a DAG
› To start your DAG, just run condor_submit_dag with your .dag file, and Condor will start a personal DAGMan daemon which will begin running your jobs:

% condor_submit_dag diamond.dag

› condor_submit_dag submits a Scheduler Universe job with DAGMan as the executable.
› Thus the DAGMan daemon itself runs as a Condor job, so you don't have to baby-sit it.

Running a DAG
› DAGMan acts as a "meta-scheduler", managing the submission of your jobs to Condor based on the DAG dependencies.
(Diagram: DAGMan reads the .dag file and submits job A to the Condor job queue; B, C and D wait)

Running a DAG (cont'd)
› DAGMan holds & submits jobs to the Condor queue at the appropriate times.
(Diagram: once A completes, DAGMan submits B and C to the Condor job queue; D still waits)

Running a DAG (cont'd)
› In case of a job failure, DAGMan continues until it can no longer make progress, and then creates a "rescue" file with the current state of the DAG.
(Diagram: one of the jobs fails, and DAGMan writes a rescue file describing the remaining work)

Recovering a DAG
› Once the failed job is ready to be re-run, the rescue file can be used to restore the prior state of the DAG.
(Diagram: DAGMan reads the rescue file and re-submits the failed job to the Condor queue)

Recovering a DAG (cont'd)
› Once that job completes, DAGMan will continue the DAG as if the failure never happened.
(Diagram: DAGMan goes on to submit job D once its parents have completed)

Finishing a DAG
› Once the DAG is complete, the DAGMan job itself is finished, and exits.
(Diagram: the job queue is empty and DAGMan exits)

Additional DAGMan Features
› Provides other handy features for job management (a sketch follows this list)...
  - nodes can have PRE & POST scripts
  - failed nodes can be automatically re-tried a configurable number of times
  - job submission can be "throttled"
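For example, in a .dag file these features look roughly like this (the node name, script names and throttle value are illustrative; SCRIPT, RETRY and the -maxjobs option are the standard DAGMan spellings, though exact support may vary by version):

Job    A  a.sub
Script PRE  A prepare_input.sh
Script POST A check_results.sh
Retry  A 3

and throttling is requested when the DAG is submitted:

% condor_submit_dag -maxjobs 10 diamond.dag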

We've seen how Condor will ...
... keep an eye on your jobs and will keep you posted on their progress
... implement your policy on the execution order of the jobs
... keep a log of your job activities
... add fault tolerance to your jobs
?

What if each job needed to run for 20 days? What if I wanted to interrupt a job with a higher priority job?

Condor's Standard Universe to the rescue!
› Condor can support various combinations of features/environments in different "Universes"
› Different Universes provide different functionality for your job:
  - Vanilla - Run any Serial Job
  - Scheduler - Plug in a meta-scheduler
  - Standard - Support for transparent process checkpoint and restart

Process Checkpointing
› Condor's Process Checkpointing mechanism saves all the state of a process into a checkpoint file
  - Memory, CPU, I/O, etc.
› The process can then be restarted from right where it left off
› Typically no changes to your job's source code needed - however, your job must be relinked with Condor's Standard Universe support library

Relinking Your Job for submission to the Standard Universe
To do this, just place "condor_compile" in front of the command you normally use to link your job:

condor_compile gcc -o myjob myjob.c
    OR
condor_compile f77 -o myjob filea.f fileb.f
    OR
condor_compile make -f MyMakefile

Limitations in the Standard Universe
› Condor's checkpointing is not at the kernel level. Thus in the Standard Universe the job may not:
  - Fork()
  - Use kernel threads
  - Use some forms of IPC, such as pipes and shared memory
› Many typical scientific jobs are OK

When will Condor checkpoint your job?
› Periodically, if desired
  - For fault tolerance
› To free the machine to do a higher priority task (higher priority job, or a job from a user with higher priority)
  - Preemptive-resume scheduling
› When you explicitly run the condor_checkpoint, condor_vacate, condor_off or condor_restart command
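For example (the hostname is illustrative):

% condor_checkpoint c01.cs.wisc.edu    Force the jobs on that machine to write a checkpoint now
% condor_vacate c01.cs.wisc.edu        Checkpoint and evict the jobs running there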

Outline
› About Condor
› Frieda the Scientist
› Managing Jobs
› Sharing Resources
› Expanding to the Grid
› Case Study: DTF
› Research Directions

What Condor daemons are running on my machine, and what do they do?

Condor Daemon Layout
(Diagram: on a Personal Condor / Central Manager, the master spawns the startd, schedd, negotiator and collector)

condor_master
› Starts up all other Condor daemons
› If there are any problems and a daemon exits, it restarts the daemon and sends email to the administrator
› Checks the time stamps on the binaries of the other Condor daemons, and if new binaries appear, the master will gracefully shutdown the currently running version and start the new version

condor_master (cont'd)
› Acts as the server for many Condor remote administration commands:
  - condor_reconfig, condor_restart, condor_off, condor_on, condor_config_val, etc.
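A few illustrative invocations (the hostname node01 is hypothetical):

% condor_reconfig                    Re-read the configuration files on the local machine
% condor_off -graceful node01        Stop Condor on node01, letting running jobs finish
% condor_config_val CONDOR_HOST      Print the value of a configuration variable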

condor_startd
› Represents a machine to the Condor system
› Responsible for starting, suspending, and stopping jobs
› Enforces the wishes of the machine owner (the owner's "policy"... more on this soon)

condor_schedd
› Represents users to the Condor system
› Maintains the persistent queue of jobs
› Responsible for contacting available machines and sending them jobs
› Services user commands which manipulate the job queue:
  - condor_submit, condor_rm, condor_q, condor_hold, condor_release, condor_prio, ...

condor_collector
› Collects information from all other Condor daemons in the pool
  - "Directory Service" / Database for a Condor pool
› Each daemon sends a periodic update called a "ClassAd" to the collector
› Services queries for information:
  - Queries from other Condor daemons
  - Queries from users (condor_status)

condor_negotiator
› Performs "matchmaking" in Condor
› Gets information from the collector about all available machines and all idle jobs
› Tries to match jobs with machines that will serve them
› Both the job and the machine must satisfy each other's requirements

Happy Day! Frieda's organization purchased a Beowulf Cluster!
› Frieda installs Condor on all the dedicated Cluster nodes, and configures them with her machine as the central manager...
› Now her Condor Pool can run multiple jobs at once

(Diagram: 600 Condor jobs submitted from the personal Condor on your workstation into your Condor Pool)

Layout of the Condor Pool
(Diagram: the Central Manager (Frieda's machine) runs master, startd, schedd, negotiator and collector; each Cluster Node runs master and startd, with ClassAd communication back to the Central Manager)

condor_status

% condor_status
Name          OpSys   Arch   State     Activity   LoadAv  Mem  ActvtyTime
haha.cs.wisc. IRIX65  SGI    Unclaimed Idle       0.198   192  0+00:00:04
antipholus.cs LINUX   INTEL  Unclaimed Idle       0.020   511  0+02:28:42
coral.cs.wisc LINUX   INTEL  Claimed   Busy       0.990   511  0+01:27:21
doc.cs.wisc.e LINUX   INTEL  Unclaimed Idle       0.260   511  0+00:20:04
dsonokwa.cs.w LINUX   INTEL  Claimed   Busy       0.810   511  0+00:01:45
ferdinand.cs. LINUX   INTEL  Claimed   Suspended  1.130   511  0+00:00:55
vm1@...       LINUX   INTEL  Unclaimed Idle       0.000   255  0+01:03:28
vm2@...       LINUX   INTEL  Unclaimed Idle       0.190   255  0+01:03:29

Frieda tries out parallel jobs...
› MPI Universe & PVM Universe
› Schedule and start an MPICH job on dedicated resources

Executable    = my-mpi-job
Universe      = MPI
Machine_count = 8
Queue

(Boss Fat Cat) The Boss says Frieda can add her co-workers' desktop machines into her Condor pool as well... but only if they can also submit jobs.

Layout of the Condor Pool
(Diagram: the Central Manager (Frieda's machine) runs master, startd, schedd, negotiator and collector; each Cluster Node runs master and startd; each Desktop runs master, startd and schedd)

Some of the machines in the Pool do not have enough memory or scratch disk space to run my job!

Specify Requirements!
› An expression (syntax similar to C or Java)
› Must evaluate to True for a match to be made

Universe     = vanilla
Executable   = my_job
InitialDir   = run_$(Process)
Requirements = Memory >= 256 && Disk > 10000
Queue 600

Specify Rank!
› All matches which meet the requirements can be sorted by preference with a Rank expression.
› The higher the Rank, the better the match.

Universe     = vanilla
Executable   = my_job
Arguments    = -arg1 -arg2
InitialDir   = run_$(Process)
Requirements = Memory >= 256 && Disk > 10000
Rank         = (KFLOPS*10000) + Memory
Queue 600

How can my jobs access their data files?

Access to Data in Condor
› Use Shared Filesystem if available
› No shared filesystem?
  - Condor can transfer files (see the sketch below)
    • Automatically send back changed files
    • Atomic transfer of multiple files
  - Standard Universe can use Remote System Calls
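A minimal sketch of the file-transfer case in a submit description file (the command names - should_transfer_files, when_to_transfer_output, transfer_input_files - follow the Condor manual and may differ slightly in older versions; input.dat and params.txt stand in for your own files):

Universe                = vanilla
Executable              = my_job
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
transfer_input_files    = input.dat, params.txt
Queue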

Remote System Calls
› I/O system calls trapped and sent back to the submit machine
› Allows Transparent Migration Across Administrative Domains
  - Checkpoint on machine A, restart on B
› No Source Code changes required
› Language Independent
› Opportunities for Application Steering
  - Example: Condor tells customer process "how" to open files

Job Startup
(Diagram: the schedd on the submit machine starts a shadow; a starter on the execute machine runs the customer job, which is linked with the Condor syscall library and talks back to the shadow)

condor_q -io

c01(69)% condor_q -io
-- Submitter: c01.cs.wisc.edu : <128.105.146.101:2996> : c01.cs.wisc.edu
 ID     OWNER    READ    WRITE  SEEK  XPUT        BUFSIZE   BLKSIZE
 72.3   edayton  [ no i/o data collected yet ]
 72.5   edayton  6.8 MB  0.0 B  0     104.0 KB/s  512.0 KB  32.0 KB
 73.0   edayton  6.4 MB  0.0 B  0     140.3 KB/s  512.0 KB  32.0 KB
 73.2   edayton  6.8 MB  0.0 B  0     112.4 KB/s  512.0 KB  32.0 KB
 73.4   edayton  6.8 MB  0.0 B  0     139.3 KB/s  512.0 KB  32.0 KB
 73.5   edayton  6.8 MB  0.0 B  0     139.3 KB/s  512.0 KB  32.0 KB
 73.7   edayton  [ no i/o data collected yet ]

0 jobs; 0 idle, 0 running, 0 held

Policy Configuration
(Boss Fat Cat) I am adding nodes to the Cluster... but the Engineering Department has priority on these nodes.

The Machine (Startd) Policy Expressions
START    - When is this machine willing to start a job
RANK     - Job preferences
SUSPEND  - When to suspend a job
CONTINUE - When to continue a suspended job
PREEMPT  - When to nicely stop running a job
KILL     - When to immediately kill a preempting job

Frieda's Current Settings
START    = True
RANK     =
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

Frieda's New Settings for the Chemistry nodes
START    = True
RANK     = Department == "Chemistry"
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

Submit file with Custom Attribute

Executable  = charm-run
Universe    = standard
+Department = Chemistry
Queue

What if "Department" not specified?
START    = True
RANK     = (Department =?= UNDEFINED)*-5 + (Department == "Chemistry")*2
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

Another example
START    = True
RANK     = (Department =?= UNDEFINED)*-5 + (Department == "Chemistry")*2 + (Department == "Physics")
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

Policy Configuration, cont.
(Boss Fat Cat) The Cluster is fine. But not the desktop machines. Condor can only use the desktops when they would otherwise be idle.

So Frieda decides she wants the desktops to:
› START jobs when there has been no activity on the keyboard/mouse for 5 minutes and the load average is low
› SUSPEND jobs as soon as activity is detected
› PREEMPT jobs if the activity continues for 5 minutes or more
› KILL jobs if they take more than 5 minutes to preempt

Macros in the Config File

NonCondorLoadAvg = (LoadAvg - CondorLoadAvg)
BackgroundLoad   = 0.3
HighLoad         = 0.5
KeyboardBusy     = (KeyboardIdle < 10)
CPU_Busy         = ($(NonCondorLoadAvg) >= $(HighLoad))
MachineBusy      = ($(CPU_Busy) || $(KeyboardBusy))
ActivityTimer    = (CurrentTime - EnteredCurrentActivity)

Desktop Machine Policy

START    = $(CPU_Idle) && KeyboardIdle > 300
SUSPEND  = $(MachineBusy)
CONTINUE = $(CPU_Idle) && KeyboardIdle > 120
PREEMPT  = (Activity == "Suspended") && $(ActivityTimer) > 300
KILL     = $(ActivityTimer) > 300

Policy Review
› Users submitting jobs can specify Requirements and Rank expressions
› Administrators can specify Startd Policy expressions individually for each machine (Start, Suspend, etc)
› Expressions can use any job or machine ClassAd attribute
› Custom attributes easily added
› Bottom Line: Enforce almost any policy!

General User Commands
condor_status      - View Pool Status
condor_q           - View Job Queue
condor_submit      - Submit new Jobs
condor_rm          - Remove Jobs
condor_prio        - Intra-User Prios
condor_history     - Completed Job Info
condor_submit_dag  - Specify Dependencies
condor_checkpoint  - Force a checkpoint
condor_compile     - Link Condor library

Administrator Commands
condor_vacate      - Leave a machine now
condor_on          - Start Condor
condor_off         - Stop Condor
condor_reconfig    - Reconfig on-the-fly
condor_config_val  - View/set config
condor_userprio    - User Priorities
condor_stats       - View detailed usage accounting stats

(CondorView usage graph)

Outline
› About Condor
› Frieda the Scientist
› Managing Jobs
› Sharing Resources
› Expanding to the Grid
› Case Study: DTF
› Research Directions

Back to the Story: Disaster Strikes! Frieda Needs Remote Resources...

Frieda Goes to the Grid!
› First Frieda takes advantage of her Condor friends!
› She knows people with their own Condor pools, and gets permission to access their resources
› She then configures her Condor pool to "flock" to these pools

(Diagram: 600 Condor jobs from the personal Condor on your workstation run in your Condor Pool and flock to a friendly Condor Pool)

How Flocking Works
› Add a line to your condor_config:
    FLOCK_HOSTS = Pool-Foo, Pool-Bar
(Diagram: the schedd on the submit machine talks to the collector and negotiator of its own Central Manager (CONDOR_HOST) and to the Central Managers of Pool-Foo and Pool-Bar)

Condor Flocking
› Remote pools are contacted in the order specified until jobs are satisfied
› The list of remote pools is a property of the Schedd, not the Central Manager
  - So different users can Flock to different pools
  - And remote pools can allow specific users
› User-priority system is "flocking-aware"
  - A pool's local users can have priority over remote users "flocking" in.

Condor Flocking, cont.
› Flocking is "Condor"-specific technology...
› Frieda also has access to Globus resources she wants to use
  - She has certificates and access to Globus gatekeepers at remote institutions
› But Frieda wants Condor's queue management features for her Globus jobs!
› She installs Condor-G so she can submit "Globus Universe" jobs to Condor

Condor-G: Globus + Condor
Globus:
› middleware deployed across entire Grid
› remote access to computational resources
› dependable, robust data transfer
Condor:
› job scheduling across multiple resources
› strong fault tolerance with checkpointing and migration
› layered over Globus as "personal batch system" for the Grid

Condor-G Installation: Tell it what you need...

... and watch it go!

Frieda Submits a Globus Universe Job
› In her submit description file, she specifies:
  - Universe = Globus
  - Which Globus Gatekeeper to use
  - Optional: Location of file containing your Globus certificate (thanks, Massimo!)

universe        = globus
globusscheduler = beak.cs.wisc.edu/jobmanager
executable      = progname
queue

How It Works
(Diagram: a Personal Condor schedd on the submit side and a Globus Resource running LSF on the remote side)

How It Works
(Diagram: 600 Globus jobs are placed in the Personal Condor schedd's queue)

How It Works
(Diagram: the schedd starts a GridManager to handle the Globus jobs)

How It Works
(Diagram: the GridManager contacts the Globus Resource, where a JobManager is started in front of LSF)

How It Works
(Diagram: the JobManager hands the user job to LSF, which runs it on the Globus resource)

Condor Globus Universe

Globus Universe Concerns
› What about Fault Tolerance?
  - Local Crashes
    • What if the submit machine goes down?
  - Network Outages
    • What if the connection to the remote Globus jobmanager is lost?
  - Remote Crashes
    • What if the remote Globus jobmanager crashes?
    • What if the remote machine goes down?

Changes to the Globus JobManager for Fault Tolerance
› Ability to restart a JobManager
› Enhanced two-phase commit submit protocol

Globus Universe Fault-Tolerance: Submit-side Failures
› All relevant state for each submitted job is stored persistently in the Condor job queue.
› This persistent information allows the Condor GridManager upon restart to read the state information and reconnect to JobManagers that were running at the time of the crash.
› If a JobManager fails to respond...

Globus Universe Fault-Tolerance: Lost Contact with Remote Jobmanager
(Flowchart:)
Can we contact the gatekeeper?
  No  - retry until we can talk to the gatekeeper again...
  Yes - the jobmanager crashed
Can we reconnect to the jobmanager?
  Yes - the network was down
  No  - the machine crashed or the job completed
Restart the jobmanager.
Has the job completed?
  Yes - update the queue
  No  - is the job still running?

Globus Universe Fault-Tolerance: Credential Management
› Authentication in Globus is done with limited-lifetime X509 proxies
› Proxy may expire before jobs finish executing
› Condor can put jobs on hold and email user to refresh proxy
› Todo: Interface with MyProxy...
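For example, before submitting long-running Globus Universe jobs Frieda might create a longer-lived proxy with the standard Globus command (the 72-hour lifetime is just an illustration):

% grid-proxy-init -hours 72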

But Frieda Wants More...
› She wants to run standard universe jobs on Globus-managed resources
  - For matchmaking and dynamic scheduling of jobs
  - For job checkpointing and migration
  - For remote system calls

Solution: Condor GlideIn
› Frieda can use the Globus Universe to run Condor daemons on Globus resources
› When the resources run these GlideIn jobs, they will temporarily join her Condor Pool
› She can then submit Standard, Vanilla, PVM, or MPI Universe jobs and they will be matched and run on the Globus resources

How It Works
(Diagram: 600 Condor jobs sit in the Personal Condor schedd's queue; the Personal Condor also runs a collector; the Globus Resource runs LSF)

How It Works
(Diagram: GlideIn jobs are added to the queue alongside the 600 Condor jobs)

How It Works
(Diagram: the schedd starts a GridManager to handle the GlideIn jobs)

How It Works
(Diagram: the GridManager contacts the Globus Resource, where a JobManager is started in front of LSF)

How It Works
(Diagram: LSF runs the GlideIn job, which starts a Condor startd on the Globus resource)

How It Works
(Diagram: the glided-in startd reports to the collector and joins Frieda's pool)

How It Works
(Diagram: the schedd now runs one of the 600 user jobs on the glided-in resource)


GlideIn Concerns
› What if a Globus resource kills my GlideIn job?
  - That resource will disappear from your pool and your jobs will be rescheduled on other machines
  - Standard universe jobs will resume from their last checkpoint like usual
› What if all my jobs are completed before a GlideIn job runs?
  - If a GlideIn Condor daemon is not matched with a job in 10 minutes, it terminates, freeing the resource

Common Questions, cont.
My Personal Condor is flocking with a bunch of Solaris machines, and also doing a GlideIn to a Silicon Graphics O2K. I do not want to statically partition my jobs.
Solution: In your submit file, say:
    Executable = myjob.$$(OpSys).$$(Arch)
The "$$(xxx)" notation is replaced with attributes from the machine ClassAd which was matched with your job.

In Review
With Condor Frieda can...
- ... manage her compute job workload
- ... access local machines
- ... access remote Condor Pools via flocking
- ... access remote compute resources on the Grid via Globus Universe jobs
- ... carve out her own personal Condor Pool from the Grid with GlideIn technology

(Diagram: 600 Condor jobs flow from the personal Condor on your workstation into your Condor Pool, to a friendly Condor Pool via flocking, and to Globus Grid resources running PBS, Condor and LSF via glide-in jobs)

Outline
› About Condor
› Frieda the Scientist
› Managing Jobs
› Sharing Resources
› Expanding to the Grid
› Case Study: DTF
› Research Directions

Leveraging Grid Resources
› The Caltech CMS group is using Grid resources today for detector simulation and data processing prototyping
› Even during this simulation and prototyping phase, the computational and data challenges are substantial...

Case Study: CMS Production
› An ongoing collaboration between:
  - Physicists & Computer Scientists
    • Vladimir Litvin (Caltech CMS)
    • Scott Koranda, Bruce Loftis, John Towns (NCSA)
    • Miron Livny, Peter Couvares, Todd Tannenbaum, Jamie Frey (UW-Madison Condor)
  - Software
    • Condor, Globus, CMS

CMS Physics
The CMS detector at the LHC will probe fundamental forces in our Universe and search for the yet-undetected Higgs Boson.
Detector expected to come online 2006.

CMS Physics

ENORMOUS Data Challenges Ahead
› One sec of CMS running will equal data volume equivalent to 10,000 Encyclopaedia Britannicas
› Data rate handled by the CMS event builder (~500 Gbit/s) will be equivalent to amount of data currently exchanged by the world's telecom networks
› Number of processors in the CMS event filter will equal number of workstations at CERN today (~4000)

Challenges of a CMS Run
› CMS run naturally divided into two phases
  - Monte Carlo detector response simulation
    • 100's of jobs per run
    • each generating ~1 GB
    • all data passed to next phase and archived
  - physics reconstruction from simulated data
    • 100's of jobs per run
    • jobs coupled via Objectivity database access
    • ~100 GB data archived
› Specific challenges
  - each run generates ~100 GB of data to be moved and archived elsewhere
  - many, many runs necessary
  - simulation & reconstruction jobs at different sites
  - this can require major human effort starting & monitoring jobs, moving data

CMS Run on the Grid
› Caltech CMS staff prepares input files on local workstation
› Pushes "one button" to submit a DAGMan job to Condor
› DAGMan job at Caltech submits secondary DAGMan job to UW Condor pool (~700 CPUs)
› Input files transferred by Condor to UW pool using Globus GASS file transfer
(Diagram: a Condor DAGMan job running at a Caltech workstation sends input files via Globus GASS to the UW Condor pool)

CMS Run on the Grid
› Secondary DAGMan job launches 100 Monte Carlo jobs on Wisconsin Condor pool
  - each job runs 12~24 hours
  - each generates ~1 GB data
  - Condor handles checkpointing & migration
  - no staff intervention
(Diagram: the Condor DAGMan job running at Caltech drives a secondary Condor DAGMan job on the WI pool, which runs 100 Monte Carlo jobs on the Wisconsin Condor pool)

CMS Run on the Grid
› When each Monte Carlo job completes, data automatically transferred to UniTree at NCSA by a POST script
  - each file ~1 GB
  - transferred by calling Globus-enabled FTP client "gsiftp"
  - NCSA UniTree runs Globus-enabled FTP server
  - authentication to FTP server on user's behalf using digital certificate
(Diagram: 100 data files, ~1 GB each, transferred via Globus gsiftp from the Wisconsin Condor pool to NCSA UniTree)

CMS Run on the Grid
› When all Monte Carlo jobs complete, Condor DAGMan at UW reports success to DAGMan at Caltech
› Condor DAGMan job running at Caltech submits another Globus-universe job to Condor to stage data from NCSA UniTree to NCSA Linux cluster
  - data transferred using Globus-enabled FTP
  - authentication on user's behalf using digital certificate
(Diagram: the secondary DAGMan reports success; Condor starts a job via the Globus jobmanager on the NCSA Linux cluster, and gsiftp fetches the data from UniTree)

CMS Run on the Grid
› Condor DAGMan at Caltech launches physics reconstruction jobs on NCSA Linux cluster
  - job launched via Globus jobmanager on NCSA cluster
  - no user intervention required
  - authentication on user's behalf using digital certificate
(Diagram: the master Condor job running at Caltech starts reconstruction jobs via the Globus jobmanager on the NCSA Linux cluster)

CMS Run on the Grid
› When reconstruction jobs at NCSA complete, data automatically archived to NCSA UniTree
  - data transferred using Globus-enabled FTP
› After data transferred, DAGMan run is complete, and Condor at Caltech emails notification to staff
(Diagram: data files transferred from the NCSA Linux cluster via Globus gsiftp to UniTree for archiving)

CMS Run Details
› Condor + Globus
  - allows Condor to submit jobs to remote host via a Globus jobmanager
  - any Globus-enabled host reachable (with authorization)
  - Condor jobs run in the "Globus" universe
  - use familiar Condor classads for submitting jobs

universe          = globus
globusscheduler   = beak.cs.wisc.edu/jobmanager-condor-INTEL-LINUX
environment       = CONDOR_UNIVERSE=scheduler
executable        = CMS/condor_dagman_run
arguments         = -f -t -l . -Lockfile cms.lock -Condorlog cms.log -Dag cms.dag -Rescue cms.rescue
input             = CMS/hg_90.tar.gz
remote_initialdir = Prod2001
output            = CMS/hg_90.out
error             = CMS/hg_90.err
log               = CMS/condor.log
notification      = always
queue

CMS Run Details
› At Caltech, DAGMan ensures reconstruction job B runs only after simulation job A completes successfully & data is transferred
› At UW, no job dependencies, but DAGMan POST scripts used to stage out data

# Caltech: main.dag
Job jobA_632 Prod2000/hg_90_gen_632.cdr
Job jobB_632 Prod2000/hg_90_sim_632.cdr
Script pre jobA_632 Prod2000/pre_632.csh
Script post jobB_632 Prod2000/post_632.csh
PARENT jobA_632 CHILD jobB_632

# UW: simulation.dag
Job sim_0 sim_0.cdr
Script post sim_0 post_0.csh
Job sim_1 sim_1.cdr
Script post sim_1 post_1.csh
# ...
Job sim_98 sim_98.cdr
Script post sim_98 post_98.csh
Job sim_99 sim_99.cdr
Script post sim_99 post_99.csh

Future Directions
› Include additional sites in both steps:
  - allow Monte Carlo jobs at Wisconsin to "glide in" to Grid sites not running Condor
  - add path so that physics reconstruction jobs may run on other sites in addition to NCSA cluster
(Diagram: the master Condor job running at Caltech drives a secondary Condor job on the WI pool; 75 Monte Carlo jobs run on the Wisconsin Condor pool and 25 on LosLobos via Condor glide-in)

[diagram of the complete CMS run, with numbered steps:]
1) Submit DAGMan to Condor (DAGMan running on a Caltech workstation)
2) Launch secondary DAGMan job on the UW pool; input files shipped via Globus GASS
3) Monte Carlo jobs run on the UW Condor pool
4) Data files transferred via gsiftp, ~1 GB each
5) UW DAGMan reports success to the Caltech DAGMan
6) DAGMan starts reconstruction jobs via the Globus jobmanager on the cluster (NCSA Linux cluster or UNM Linux cluster)
7) gsiftp fetches data from UniTree
8) Processed Objectivity database stored to UniTree (NCSA UniTree Globus-enabled FTP server)
9) Reconstruction job reports success to DAGMan
http://www.cs.wisc.edu/condor 158

Outline
› About Condor
› Frieda the Scientist
› Managing Jobs
› Sharing Resources
› Expanding to the Grid
› Case Study: DTF
› Research Directions
http://www.cs.wisc.edu/condor 159

Research Directions
› Storage needs management too!
  • Discover, claim, use, release, monitor...
› Grid communities...
  • Bring storage and CPUs together.
› Components:
  • NeST provides storage management.
  • Bypass enables transparent access.
  • Advanced ClassAds are the glue.
http://www.cs.wisc.edu/condor 160

Frieda is Back!
› Frieda is on sabbatical in Italy.
› Database stored in Bologna.
› Need to run 300 instances of the simulator.
› But all the machines are in Wisconsin!
› What to do?
http://www.cs.wisc.edu/condor 161

Hmmm…
http://www.cs.wisc.edu/condor 162

New framework needed
› Remote I/O is possible anywhere
› Build a notion of locality into the system?
› What are the possibilities?
  • Move the job to the data
  • Move the data to the job
  • Allow the job to access the data remotely
› Need a framework to expose these policies
http://www.cs.wisc.edu/condor 163

Grid Communities
› A meeting place for many resources and users.
› A structure for reasoning about complex systems.
› A natural expression of locality between CPUs and storage.
http://www.cs.wisc.edu/condor 164

Grid Communities
[diagram: two communities, UW and INFN]
http://www.cs.wisc.edu/condor 165

Key elements
› Storage appliances, interposition agents, schedulers, and match-makers
› Mechanism, not policies
› Policies are exposed to an upper layer
  • We will, however, demonstrate the strength of this mechanism
http://www.cs.wisc.edu/condor 166

Storage appliances
› Should run without special privilege
  • Flexible and easily deployable
  • Acceptable to nervous sysadmins
› Should allow multiple access modes
  • Low-latency local accesses
  • High-bandwidth remote puts and gets
http://www.cs.wisc.edu/condor 167

NeST
[architecture diagram: a common protocol layer (GFTP, Chirp, HTTP, FTP) feeds a dispatcher; control flow goes to the storage manager and data flow to the transfer manager (multiple concurrencies), both sitting on the physical storage layer]
http://www.cs.wisc.edu/condor 168

Interposition agents
› A thin software layer interposed between the application and the OS
› Allows applications to transparently interact with storage appliances
› Unmodified programs can run in a grid environment
http://www.cs.wisc.edu/condor 169
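As a rough sketch of the interposition idea (this is not the actual Bypass/PFS code, only an illustration of the mechanism), a small shared library loaded with LD_PRELOAD can wrap a C library call such as open() and decide, per path, whether to satisfy it locally or hand it to a storage appliance:

  /* agent.c -- minimal LD_PRELOAD interposition sketch (illustrative only) */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <fcntl.h>
  #include <stdarg.h>
  #include <stdio.h>
  #include <sys/types.h>

  int open(const char *path, int flags, ...)
  {
      /* Locate the real open() further down the link chain. */
      int (*real_open)(const char *, int, ...) =
          (int (*)(const char *, int, ...)) dlsym(RTLD_NEXT, "open");

      mode_t mode = 0;
      if (flags & O_CREAT) {          /* the mode argument is only present when creating */
          va_list ap;
          va_start(ap, flags);
          mode = (mode_t) va_arg(ap, int);
          va_end(ap);
      }

      /* A real agent would rewrite 'path' here to point at a nearby
         storage appliance; this sketch only records the access. */
      fprintf(stderr, "interposed open: %s\n", path);
      return real_open(path, flags, mode);
  }

Built as a shared object (for example, gcc -shared -fPIC -o agent.so agent.c -ldl) and run with LD_PRELOAD=./agent.so ./sim.exe, the unmodified application's file accesses pass through the agent without recompilation, which is exactly the property the slide is after.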

PFS: Pluggable File System
[diagram]
http://www.cs.wisc.edu/condor 170

Scheduling systems and discovery
› The top-level scheduler needs the ability to discover diverse resources
› CPU discovery
  • Where can a job run?
› Device discovery
  • Where is my local storage appliance?
› Replica discovery
  • Where can I find my data?
http://www.cs.wisc.edu/condor 171
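For the CPU-discovery case, Condor already answers this kind of question through condor_status; for instance, a query like the one below (the constraint itself is only an example) lists the machines whose ads claim Linux and at least 256 MB of memory:

  condor_status -constraint 'OpSys == "LINUX" && Memory >= 256'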

Match-making
› Match-making is the glue which brings the discovery systems together
› Allows participants to indirectly identify each other
  • i.e. they can locate resources without explicitly naming them
http://www.cs.wisc.edu/condor 172

Three-way matching
[diagram: a Job Ad, a Machine Ad, and a Storage Ad (published by NeST) are matched together; the job ad refers to NearestStorage, and the machine ad knows where NearestStorage is]
http://www.cs.wisc.edu/condor 173

Two-way ClassAds

Job ClassAd:
  Type = "job"
  TargetType = "machine"
  Cmd = "sim.exe"
  Owner = "thain"
  Requirements = (OpSys == "linux")

Machine ClassAd:
  Type = "machine"
  TargetType = "job"
  OpSys = "linux"
  Requirements = (Owner == "thain")

http://www.cs.wisc.edu/condor 174

Three-way ClassAds

Job ClassAd:
  Type = "job"
  TargetType = "machine"
  Cmd = "sim.exe"
  Owner = "thain"
  Requirements = (OpSys == "linux") && NearestStorage.HasCMSData

Machine ClassAd:
  Type = "machine"
  TargetType = "job"
  OpSys = "linux"
  Requirements = (Owner == "thain")
  NearestStorage = (Name == "turkey") && (Type == "Storage")

Storage ClassAd:
  Type = "storage"
  Name = "turkey.cs.wisc.edu"
  HasCMSData = true
  CMSDataPath = "/cmsdata"

The job's Requirements can now refer to NearestStorage, which the machine ad defines as a constraint selecting its nearby storage appliance; a successful match therefore binds the job, a machine, and a storage appliance at the same time.
http://www.cs.wisc.edu/condor 175

BOOM!
http://www.cs.wisc.edu/condor 176

CMS simulator sample run
› Frieda's jobs have a high I/O : CPU ratio
› Access about 20 MB from a 300 MB database
› Write about 1 MB of output
› ~160 seconds execution time
  • on a 600 MIPS machine with local disk
http://www.cs.wisc.edu/condor 177
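A rough back-of-the-envelope check using only the numbers above: each job moves about 21 MB (20 MB read plus 1 MB written) per ~160 seconds of computation, i.e. on the order of 130 KB/s per job, so a hundred such jobs running concurrently would ask for roughly 13 MB/s in aggregate from wherever the database lives; that is exactly why the placement of the data matters.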

To infinity and beyond
› Speedups of 2.5x are possible when we are able to use locality intelligently
› This will continue to be important
  • Data sets are getting larger and larger
  • There will always be bottlenecks
http://www.cs.wisc.edu/condor 178

I/O Communities
[diagram: two communities, UW and INFN]
http://www.cs.wisc.edu/condor 179

Two Grid Communities
› INFN Condor pool
  • 236 machines, about 30 available at any one time
  • Wide range of machines and networks spread across Italy
  • Storage appliance in Bologna: 750 MIPS, 378 MB RAM
http://www.cs.wisc.edu/condor 180

Two Grid Communities
› UW Condor pool
  • ~900 machines, 100 dedicated for us
  • Each is 600 MIPS with 512 MB RAM
  • Networked on a 100 Mb/s switch
  • One machine was used as a storage appliance
http://www.cs.wisc.edu/condor 181

Policy specification
› Run only with locality
  • Requirements = (NearestStorage.HasCMSData)
› Run in only one particular community
  • Requirements = (NearestStorage.Name == "nestore.bologna")
› Prefer the home community first
  • Requirements = (NearestStorage.HasCMSData)
  • Rank = (NearestStorage.Name == "nestore.bologna") ? 10 : 0
› Arbitrarily complex
  • Requirements = (NearestStorage.Name == "nestore.bologna") || (ClockHour < 7) || (ClockHour > 18)
http://www.cs.wisc.edu/condor 182
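In a Condor submit description file these policies show up as ordinary requirements and rank lines; the sketch below assumes the three-way matchmaking described earlier is in place, since NearestStorage is not an attribute a stock machine ad would carry:

  # prefer the home community, but accept any site that has the data
  requirements = (NearestStorage.HasCMSData)
  rank         = (NearestStorage.Name == "nestore.bologna") ? 10 : 0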

Policies evaluated
› INFN local
› UW remote
› UW stage first
› UW local (pre-staged)
› INFN local, UW remote
› INFN local, UW stage
› INFN local, UW local
http://www.cs.wisc.edu/condor 183

Completion Time
[chart]
http://www.cs.wisc.edu/condor 184

CPU Efficiency
[chart]
http://www.cs.wisc.edu/condor 185

Future work
› Automation of locality specification
  • Configuration of communities
  • Dynamically adjust size as load dictates
› Automation of scheduling policy
  • Selection of movement policy
  • Add storage appliances as necessary
http://www.cs.wisc.edu/condor 186

Lessons from I/O Communities
› I/O communities expose locality policies
› Users can increase throughput
› Owners can maximize resource utilization
http://www.cs.wisc.edu/condor 187

Wrap Up
› Condor…
  • …empowers ordinary users;
  • …can harness resources globally;
  • …keeps everyone happy with matchmaking;
  • …is flexible, reliable, and proven.
› Condor powers the Grid!
http://www.cs.wisc.edu/condor 188

Condor at HPDC
› John Bent, "Flexibility, Manageability, and Performance in a Grid Storage Appliance"
  • Wednesday, 1630, Session I
› Douglas Thain, "Error Scope on a Computational Grid: Theory and Practice"
  • Thursday, 1330, Session VII
http://www.cs.wisc.edu/condor 189

Thank you!
Check us out on the Web: http://www.cs.wisc.edu/condor
Email: [email protected] wisc.edu
http://www.cs.wisc.edu/condor 190