commit
04a7e20392
```
Name:             pi
Namespace:        default
Selector:         batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
Labels:           batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
                  batch.kubernetes.io/job-name=pi
...
Annotations:      batch.kubernetes.io/job-tracking: ""
Parallelism:      1
Completions:      1
Start Time:       Mon, 02 Dec 2019 15:20:11 +0200
Completed At:     Mon, 02 Dec 2019 15:21:16 +0200
Duration:         65s
Pods Statuses:    0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
           batch.kubernetes.io/job-name=pi
  Containers:
   pi:
    Image:      perl:5.34.0
    Port:       <none>
    Host Port:  <none>
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  21s   job-controller  Created pod: pi-xf9p4
  Normal  Completed         18s   job-controller  Job finished
```

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    batch.kubernetes.io/job-tracking: ""
  ...
  creationTimestamp: "2022-11-10T17:53:53Z"
  generation: 1
  labels:
    batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
    batch.kubernetes.io/job-name: pi
  name: pi
  namespace: default
  resourceVersion: "4751"
  uid: 204fb678-040b-497f-9266-35ffa8716d14
spec:
  backoffLimit: 4
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
        batch.kubernetes.io/job-name: pi
    spec:
      containers:
      - command:
        - perl
        - -Mbignum=bpi
        - -wle
        - print bpi(2000)
        image: perl:5.34.0
        imagePullPolicy: IfNotPresent
        name: pi
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  active: 1
  ready: 0
  startTime: "2022-11-10T17:53:57Z"
  uncountedTerminatedPods: {}
```

To view completed Pods of a Job, use `kubectl get pods`.

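For instance, for the `pi` Job from the examples above (the label selector here is an assumption based on that Job's name):

```shell
# List Pods belonging to the Job named "pi"
kubectl get pods -l batch.kubernetes.io/job-name=pi
```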
To list all the Pods that belong to a Job in a machine-readable form, you can use a command like this:

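The command itself is missing here; a sketch of what it likely looked like, assuming the `pi` Job from the examples above:

```shell
# Collect the names of all Pods belonging to the Job "pi"
pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```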
Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression with the name from each Pod in the returned list.

View the standard output of one of the pods:

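A sketch, assuming `$pods` holds the Pod name(s) collected with the selector command above:

```shell
kubectl logs $pods
```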
Another way to view the logs of a Job:

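Using the Job name directly (here the `pi` Job from the examples above):

```shell
kubectl logs jobs/pi
```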
The output is similar to this:

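Truncated here; the full output of `print bpi(2000)` is the first 2000 digits of pi:

```
3.14159265358979323846264338327950288419716939937510...
```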
Writing a Job spec

As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.

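A minimal sketch of such a manifest, modeled on the `pi` example used throughout this page:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi        # must be a valid DNS subdomain; ideally also a valid DNS label
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```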
When the control plane creates new Pods for a Job, the .metadata.name of the Job is part of the basis for naming those Pods. The name of a Job must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label. Even when the name is a DNS subdomain, the name must be no longer than 63 characters.

A Job also needs a .spec section.

Job Labels

Job labels will have the batch.kubernetes.io/ prefix for job-name and controller-uid.

Pod Template

The .spec.template is the only required field of the .spec.

The .spec.template is a pod template. It has exactly the same schema as a Pod, except that it is nested and does not have an apiVersion or kind.

In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy.

Only a RestartPolicy equal to Never or OnFailure is allowed.

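In the pod template this is a single field (OnFailure shown here; Never is the other valid value):

```yaml
spec:
  template:
    spec:
      restartPolicy: OnFailure
```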
Pod selector

The .spec.selector field is optional. In almost all cases you should not specify it. See the section on specifying your own pod selector.

Parallel execution for Jobs

There are three main types of task suitable to run as a Job:

1. Non-parallel Jobs
   - normally, only one Pod is started, unless the Pod fails.
   - the Job is complete as soon as its Pod terminates successfully.

2. Parallel Jobs with a fixed completion count:
   - specify a non-zero positive value for .spec.completions.
   - the Job represents the overall task, and is complete when there are .spec.completions successful Pods.
   - when using .spec.completionMode="Indexed", each Pod gets a different index in the range 0 to .spec.completions-1.

3. Parallel Jobs with a work queue:
   - do not specify .spec.completions, default to .spec.parallelism.
   - the Pods must coordinate amongst themselves or with an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
   - each Pod is independently capable of determining whether all its peers are done, and thus that the entire Job is done.
   - when any Pod from the Job terminates with success, no new Pods are created.
   - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
   - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.

For a non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both are defaulted to 1.

For a fixed completion count Job, you should set .spec.completions to the number of completions needed. You can set .spec.parallelism, or leave it unset and it will default to 1.

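For example, a fixed completion count Job's spec might include (values hypothetical):

```yaml
spec:
  completions: 5   # the Job is done after 5 successful Pods
  parallelism: 2   # at most 2 Pods run at once
```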
For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer.

To learn more about how to make use of the different types of job, see the job patterns section.

Controlling parallelism

The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased.

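As a sketch, pausing a running Job this way could look like the following (assuming the `pi` Job; .spec.parallelism is a mutable field):

```shell
# Set parallelism to 0, effectively pausing the Job
kubectl patch job pi --type=merge -p '{"spec":{"parallelism":0}}'
```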
Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons:

- For fixed completion count Jobs, the actual number of pods running in parallel will not exceed the number of remaining completions. Higher values of .spec.parallelism are effectively ignored.
- For work queue Jobs, no new Pods are started after any Pod has succeeded; remaining Pods are allowed to complete, however.
- If the Job controller has not had time to react.
- If the Job controller failed to create Pods for any reason (lack of ResourceQuota, lack of permission, etc.), then there may be fewer pods than requested.
- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
- When a Pod is gracefully shut down, it takes time to stop.

Completion mode

Jobs with fixed completion count - that is, jobs that have non-null .spec.completions - can have a completion mode that is specified in .spec.completionMode:

NonIndexed (default): the Job is considered complete when there have been .spec.completions successfully completed Pods. In other words, each Pod completion is homologous to each other. Note that Jobs that have null .spec.completions are implicitly NonIndexed.

Indexed: the Pods of a Job get an associated completion index from 0 to .spec.completions-1. The index is available through four mechanisms:

- The Pod annotation batch.kubernetes.io/job-completion-index.
- The Pod label batch.kubernetes.io/job-completion-index (for v1.28 and later). Note the feature gate PodIndexLabel must be enabled to use this label, and it is enabled by default.
- As part of the Pod hostname, following the pattern $(job-name)-$(index). When you use an Indexed Job in combination with a Service, Pods within the Job can use the deterministic hostnames to address each other via DNS. For more information about how to configure this, see Job with Pod-to-Pod Communication.
- From the containerized task, in the environment variable JOB_COMPLETION_INDEX.

The Job is considered complete when there is one successfully completed Pod for each index. To learn more about how to use this mode, see Indexed Job for Parallel Processing with Static Work Assignment.

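A sketch of an Indexed Job that reads its index from the environment variable (the name, image, and command here are illustrative, not from this page):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo   # hypothetical name
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        # Each Pod sees its own index (0..4) in JOB_COMPLETION_INDEX
        command: ["sh", "-c", "echo processing item $JOB_COMPLETION_INDEX"]
```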