Assigning specific CPU resources to pod - kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request...



























I've created an Elasticsearch service to use as the backend for Jaeger tracing, following this guide, on a Kubernetes cluster on GCP.



I have the elasticsearch service:



~/w/jaeger-elasticsearch ❯❯❯ kubectl get service elasticsearch
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   8m
~/w/jaeger-elasticsearch ❯❯❯


And its respective pod, called elasticsearch-0:



~/w/jaeger-elasticsearch ❯❯❯ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
elasticsearch-0                   1/1     Running   0          37m
jaeger-agent-cnw9m                1/1     Running   0          2h
jaeger-agent-dl5n9                1/1     Running   0          2h
jaeger-agent-zzljk                1/1     Running   0          2h
jaeger-collector-9879cd76-fvpz4   1/1     Running   0          2h
jaeger-query-5584576487-dzqkd     1/1     Running   0          2h
~/w/jaeger-elasticsearch ❯❯❯ kubectl get pod elasticsearch-0
NAME              READY   STATUS    RESTARTS   AGE
elasticsearch-0   1/1     Running   0          38m
~/w/jaeger-elasticsearch ❯❯❯


I've looked at my pod configuration on GCP, and I can see that my elasticsearch-0 pod has limited resources:



apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
  creationTimestamp: 2019-01-03T09:11:10Z
  generateName: elasticsearch-


I want to assign it a specific CPU request and CPU limit according to the documentation, so I proceeded to modify the pod manifest, adding the following directives:



- cpu "2" in the args section:



args:
- -cpus
- "2"


And I am including a resources:requests field in the container spec to request 0.5 CPU, plus a resources:limits field to cap CPU at 1, like this:



limits:
  cpu: "1"
requests:
  cpu: "0.5"


My complete pod manifest is below (see items 1 through 5 marked with # comments):



apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
  creationTimestamp: 2019-01-03T09:11:10Z
  generateName: elasticsearch-
  labels:
    app: jaeger-elasticsearch
    controller-revision-hash: elasticsearch-8684f69799
    jaeger-infra: elasticsearch-replica
    statefulset.kubernetes.io/pod-name: elasticsearch-0
  name: elasticsearch-0
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: elasticsearch
    uid: 86578784-0f36-11e9-b8b1-42010aa60019
  resourceVersion: "2778"
  selfLink: /api/v1/namespaces/default/pods/elasticsearch-0
  uid: 82d3be2f-0f37-11e9-b8b1-42010aa60019
spec:
  containers:
  - args:
    - -Ehttp.host=0.0.0.0
    - -Etransport.host=127.0.0.1
    - -cpus  # 1
    - "2"    # 2
    command:
    - bin/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imagePullPolicy: Always
    name: elasticsearch
    readinessProbe:
      exec:
        command:
        - curl
        - --fail
        - --silent
        - --output
        - /dev/null
        - --user
        - elastic:changeme
        - localhost:9200
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 4
    resources:  # 3
      limits:
        cpu: "1"    # 4
      requests:
        cpu: "0.5"  # 5
        # container has a request of 0.5 CPU
        #cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-96vwj
      readOnly: true
  dnsPolicy: ClusterFirst
  hostname: elasticsearch-0
  nodeName: gke-jaeger-persistent-st-default-pool-81004235-h8xt
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: elasticsearch
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: data
  - name: default-token-96vwj
    secret:
      defaultMode: 420
      secretName: default-token-96vwj
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imageID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7
    lastState: {}
    name: elasticsearch
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-01-03T09:11:13Z
  hostIP: 10.166.0.2
  phase: Running
  podIP: 10.36.0.10
  qosClass: Burstable
  startTime: 2019-01-03T09:11:10Z


But when I apply my pod manifest file, I get the following output (abridged):



Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
.
.
.
for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
~/w/jaeger-elasticsearch ❯❯❯


The complete output of my kubectl apply command is this:



~/w/jaeger-elasticsearch ❯❯❯ kubectl apply -f elasticsearch-0.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"},"creationTimestamp":"2019-01-03T09:11:10Z","generateName":"elasticsearch-","labels":{"app":"jaeger-elasticsearch","controller-revision-hash":"elasticsearch-8684f69799","jaeger-infra":"elasticsearch-replica","statefulset.kubernetes.io/pod-name":"elasticsearch-0"},"name":"elasticsearch-0","namespace":"default","ownerReferences":[{"apiVersion":"apps/v1","blockOwnerDeletion":true,"controller":true,"kind":"StatefulSet","name":"elasticsearch","uid":"86578784-0f36-11e9-b8b1-42010aa60019"}],"resourceVersion":"2778","selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"command":["bin/elasticsearch"],"image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imagePullPolicy":"Always","name":"elasticsearch","readinessProbe":{"exec":{"command":["curl","--fail","--silent","--output","/dev/null","--user","elastic:changeme","localhost:9200"]},"failureThreshold":3,"initialDelaySeconds":5,"periodSeconds":5,"successThreshold":1,"timeoutSeconds":4},"resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/data","name":"data"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"default-token-96vwj","readOnly":true}]}],"dnsPolicy":"ClusterFirst","hostname":"elasticsearch-0","nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"default","serviceAccountName":"default","subdomain":"elasticsearch","terminationGracePeriodSeconds":30,"tolerations":[{"effect":"NoExecute","ke
y":"node.kubernetes.io/not-ready","operator":"Exists","tolerationSeconds":300},{"effect":"NoExecute","key":"node.kubernetes.io/unreachable","operator":"Exists","tolerationSeconds":300}],"volumes":[{"emptyDir":{},"name":"data"},{"name":"default-token-96vwj","secret":{"defaultMode":420,"secretName":"default-token-96vwj"}}]},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:10Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:40Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:10Z","status":"True","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"hostIP":"10.166.0.2","phase":"Running","podIP":"10.36.0.10","qosClass":"Burstable","startTime":"2019-01-03T09:11:10Z"}}n"},"creationTimestamp":"2019-01-03T09:11:10Z","resourceVersion":"2778","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"$setElementOrder/containers":[{"name":"elasticsearch"}],"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"name":"elasticsearch","resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}}}]},"status":{"$setElementOrder/conditions":[{"type":"Initialized"},{"type":"Ready"},{"type":"PodScheduled"}],"conditions":[{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"Initialized"},{"lastTransitionTime":"2019-01-03T09:11:40Z","type":"Ready"},{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf95
9bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"podIP":"10.36.0.10","startTime":"2019-01-03T09:11:10Z"}}
to:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "elasticsearch-0", Namespace: "default"
Object: &{map["kind":"Pod" "apiVersion":"v1" "metadata":map["selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0" "generateName":"elasticsearch-" "namespace":"default" "resourceVersion":"11515" "creationTimestamp":"2019-01-03T10:29:53Z""labels":map["controller-revision-hash":"elasticsearch-8684f69799" "jaeger-infra":"elasticsearch-replica" "statefulset.kubernetes.io/pod-name":"elasticsearch-0" "app":"jaeger-elasticsearch"] "annotations":map["kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"] "ownerReferences":[map["controller":%!q(bool=true) "blockOwnerDeletion":%!q(bool=true) "apiVersion":"apps/v1" "kind":"StatefulSet" "name":"elasticsearch" "uid":"86578784-0f36-11e9-b8b1-42010aa60019"]] "name":"elasticsearch-0" "uid":"81cba2ad-0f42-11e9-b8b1-42010aa60019"] "spec":map["restartPolicy":"Always" "terminationGracePeriodSeconds":'x1e' "serviceAccountName":"default" "securityContext":map "subdomain":"elasticsearch" "schedulerName":"default-scheduler" "tolerations":[map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'u012c' "key":"node.kubernetes.io/not-ready"] map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'u012c' "key":"node.kubernetes.io/unreachable"]] "volumes":[map["name":"data" "emptyDir":map] map["name":"default-token-96vwj" "secret":map["secretName":"default-token-96vwj" "defaultMode":'u01a4']]] "dnsPolicy":"ClusterFirst" "serviceAccount":"default" "nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt" "hostname":"elasticsearch-0" "containers":[map["name":"elasticsearch" "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "readinessProbe":map["exec":map["command":["curl" "--fail" "--silent" "--output" "/dev/null" "--user" "elastic:changeme" "localhost:9200"]] "initialDelaySeconds":'x05' "timeoutSeconds":'x04' "periodSeconds":'x05' "successThreshold":'x01' "failureThreshold":'x03'] "terminationMessagePath":"/dev/termination-log" "imagePullPolicy":"Always" 
"command":["bin/elasticsearch"] "args":["-Ehttp.host=0.0.0.0" "-Etransport.host=127.0.0.1"] "resources":map["requests":map["cpu":"100m"]] "volumeMounts":[map["name":"data" "mountPath":"/data"] map["name":"default-token-96vwj" "readOnly":%!q(bool=true) "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"]] "terminationMessagePolicy":"File"]]] "status":map["qosClass":"Burstable" "phase":"Running" "conditions":[map["type":"Initialized" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"] map["type":"Ready" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:30:17Z"] map["type":"PodScheduled" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"]] "hostIP":"10.166.0.2" "podIP":"10.36.0.11" "startTime":"2019-01-03T10:29:53Z" "containerStatuses":[map["name":"elasticsearch" "state":map["running":map["startedAt":"2019-01-03T10:29:55Z"]] "lastState":map "ready":%!q(bool=true) "restartCount":'x00' "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7" "containerID":"docker://e7f629b79da33b482b38fdb990717b3d61d114503961302e2e8feccb213bbd4b"]]]]}
for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
~/w/jaeger-elasticsearch ❯❯❯
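For context on why the server answers with a Conflict: this is Kubernetes' optimistic concurrency control. The patch was computed against resourceVersion "2778", but the live object in the dump above is at resourceVersion "11515" (and has a different uid, so the pod was apparently recreated by its StatefulSet in the meantime). A toy sketch of that version check (illustrative only, not the real API server logic):

```python
class Conflict(Exception):
    """Raised when an update is based on a stale resourceVersion."""


def apply_update(live: dict, patch: dict) -> dict:
    """Accept an update only if it was computed against the live object's version."""
    if patch["resourceVersion"] != live["resourceVersion"]:
        raise Conflict(
            "the object has been modified; please apply your changes "
            "to the latest version and try again"
        )
    updated = {**live, **patch}
    # On success the server bumps the version so later stale writers are rejected.
    updated["resourceVersion"] = str(int(live["resourceVersion"]) + 1)
    return updated


live = {"name": "elasticsearch-0", "resourceVersion": "11515"}
stale_patch = {"name": "elasticsearch-0", "resourceVersion": "2778"}

try:
    apply_update(live, stale_patch)
except Conflict as err:
    print(err)  # the same complaint kubectl relayed above
```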


How can I modify my pod YAML file to assign it more resources and resolve the kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container elasticsearch' message?










share|improve this question





























    0















    I've created an elasticsearch service to apply it like backend to jaeger tracing, using this guide, all over Kubernetes GCP cluster.



    I have the elasticsearch service:



    ~/w/jaeger-elasticsearch ❯❯❯ kubectl get service elasticsearch
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 8m
    ~/w/jaeger-elasticsearch ❯❯❯


    And their respective pod called elasticsearch-0



    ~/w/jaeger-elasticsearch ❯❯❯ kubectl get pods
    NAME READY STATUS RESTARTS AGE
    elasticsearch-0 1/1 Running 0 37m
    jaeger-agent-cnw9m 1/1 Running 0 2h
    jaeger-agent-dl5n9 1/1 Running 0 2h
    jaeger-agent-zzljk 1/1 Running 0 2h
    jaeger-collector-9879cd76-fvpz4 1/1 Running 0 2h
    jaeger-query-5584576487-dzqkd 1/1 Running 0 2h
    ~/w/jaeger-elasticsearch ❯❯❯ kubectl get pod elasticsearch-0
    NAME READY STATUS RESTARTS AGE
    elasticsearch-0 1/1 Running 0 38m
    ~/w/jaeger-elasticsearch ❯❯❯


    I've enter to my pod configuration on GCP, and I can see that my elasticsearch-0 pod have limited resources:



    apiVersion: v1
    kind: Pod
    metadata:
    annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
    elasticsearch'
    creationTimestamp: 2019-01-03T09:11:10Z
    generateName: elasticsearch-


    And then, I want to assign it specific CPU request and CPU limit according to the documentation, and then, I proceed to modufy the pod manifest, adding the following directives:



    - cpu "2" in the args section:



    args:
    - -cpus
    - "2"


    And I am including a resources:requests field in the container resource, in order to specify a request of 0.5 CPU and I've include a resources:limits in order to specify a CPU limit of this way:



      limits:
    cpu: "1"
    requests:
    cpu: "0.5"


    My complete pod manifest is this (See numerals 1,2,3,4 and 5 numerals commented with # symbol):



    apiVersion: v1
    kind: Pod
    metadata:
    annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
    elasticsearch'
    creationTimestamp: 2019-01-03T09:11:10Z
    generateName: elasticsearch-
    labels:
    app: jaeger-elasticsearch
    controller-revision-hash: elasticsearch-8684f69799
    jaeger-infra: elasticsearch-replica
    statefulset.kubernetes.io/pod-name: elasticsearch-0
    name: elasticsearch-0
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: elasticsearch
    uid: 86578784-0f36-11e9-b8b1-42010aa60019
    resourceVersion: "2778"
    selfLink: /api/v1/namespaces/default/pods/elasticsearch-0
    uid: 82d3be2f-0f37-11e9-b8b1-42010aa60019
    spec:
    containers:
    - args:
    - -Ehttp.host=0.0.0.0
    - -Etransport.host=127.0.0.1
    - -cpus # 1
    - "2" # 2
    command:
    - bin/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imagePullPolicy: Always
    name: elasticsearch
    readinessProbe:
    exec:
    command:
    - curl
    - --fail
    - --silent
    - --output
    - /dev/null
    - --user
    - elastic:changeme
    - localhost:9200
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 4
    resources: # 3
    limits:
    cpu: "1" # 4
    requests:
    cpu: "0.5" # 5
    # container has a request of 0.5 CPU
    #cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
    name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    name: default-token-96vwj
    readOnly: true
    dnsPolicy: ClusterFirst
    hostname: elasticsearch-0
    nodeName: gke-jaeger-persistent-st-default-pool-81004235-h8xt
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    subdomain: elasticsearch
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
    - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
    volumes:
    - emptyDir: {}
    name: data
    - name: default-token-96vwj
    secret:
    defaultMode: 420
    secretName: default-token-96vwj
    status:
    conditions:
    - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: Initialized
    - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:40Z
    status: "True"
    type: Ready
    - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: PodScheduled
    containerStatuses:
    - containerID: docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imageID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7
    lastState: {}
    name: elasticsearch
    ready: true
    restartCount: 0
    state:
    running:
    startedAt: 2019-01-03T09:11:13Z
    hostIP: 10.166.0.2
    phase: Running
    podIP: 10.36.0.10
    qosClass: Burstable
    startTime: 2019-01-03T09:11:10Z


    But when I apply my pod manifest file, I get the following output:



    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Error from server (Conflict): error when applying patch:
    .
    .
    .
    for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
    ~/w/jaeger-elasticsearch ❯❯❯


    The complete output of my kubectl apply command is this:



    ~/w/jaeger-elasticsearch ❯❯❯ kubectl apply -f elasticsearch-0.yaml
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Error from server (Conflict): error when applying patch:
    {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"},"creationTimestamp":"2019-01-03T09:11:10Z","generateName":"elasticsearch-","labels":{"app":"jaeger-elasticsearch","controller-revision-hash":"elasticsearch-8684f69799","jaeger-infra":"elasticsearch-replica","statefulset.kubernetes.io/pod-name":"elasticsearch-0"},"name":"elasticsearch-0","namespace":"default","ownerReferences":[{"apiVersion":"apps/v1","blockOwnerDeletion":true,"controller":true,"kind":"StatefulSet","name":"elasticsearch","uid":"86578784-0f36-11e9-b8b1-42010aa60019"}],"resourceVersion":"2778","selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"command":["bin/elasticsearch"],"image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imagePullPolicy":"Always","name":"elasticsearch","readinessProbe":{"exec":{"command":["curl","--fail","--silent","--output","/dev/null","--user","elastic:changeme","localhost:9200"]},"failureThreshold":3,"initialDelaySeconds":5,"periodSeconds":5,"successThreshold":1,"timeoutSeconds":4},"resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/data","name":"data"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"default-token-96vwj","readOnly":true}]}],"dnsPolicy":"ClusterFirst","hostname":"elasticsearch-0","nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"default","serviceAccountName":"default","subdomain":"elasticsearch","terminationGracePeriodSeconds":30,"tolerations":[{"effect":"NoExecute"
,"key":"node.kubernetes.io/not-ready","operator":"Exists","tolerationSeconds":300},{"effect":"NoExecute","key":"node.kubernetes.io/unreachable","operator":"Exists","tolerationSeconds":300}],"volumes":[{"emptyDir":{},"name":"data"},{"name":"default-token-96vwj","secret":{"defaultMode":420,"secretName":"default-token-96vwj"}}]},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:10Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:40Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:10Z","status":"True","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"hostIP":"10.166.0.2","phase":"Running","podIP":"10.36.0.10","qosClass":"Burstable","startTime":"2019-01-03T09:11:10Z"}}n"},"creationTimestamp":"2019-01-03T09:11:10Z","resourceVersion":"2778","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"$setElementOrder/containers":[{"name":"elasticsearch"}],"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"name":"elasticsearch","resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}}}]},"status":{"$setElementOrder/conditions":[{"type":"Initialized"},{"type":"Ready"},{"type":"PodScheduled"}],"conditions":[{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"Initialized"},{"lastTransitionTime":"2019-01-03T09:11:40Z","type":"Ready"},{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dad
cf959bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"podIP":"10.36.0.10","startTime":"2019-01-03T09:11:10Z"}}
    to:
    Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
    Name: "elasticsearch-0", Namespace: "default"
    Object: &{map["kind":"Pod" "apiVersion":"v1" "metadata":map["selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0" "generateName":"elasticsearch-" "namespace":"default" "resourceVersion":"11515" "creationTimestamp":"2019-01-03T10:29:53Z""labels":map["controller-revision-hash":"elasticsearch-8684f69799" "jaeger-infra":"elasticsearch-replica" "statefulset.kubernetes.io/pod-name":"elasticsearch-0" "app":"jaeger-elasticsearch"] "annotations":map["kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"] "ownerReferences":[map["controller":%!q(bool=true) "blockOwnerDeletion":%!q(bool=true) "apiVersion":"apps/v1" "kind":"StatefulSet" "name":"elasticsearch" "uid":"86578784-0f36-11e9-b8b1-42010aa60019"]] "name":"elasticsearch-0" "uid":"81cba2ad-0f42-11e9-b8b1-42010aa60019"] "spec":map["restartPolicy":"Always" "terminationGracePeriodSeconds":'x1e' "serviceAccountName":"default" "securityContext":map "subdomain":"elasticsearch" "schedulerName":"default-scheduler" "tolerations":[map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'u012c' "key":"node.kubernetes.io/not-ready"] map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'u012c' "key":"node.kubernetes.io/unreachable"]] "volumes":[map["name":"data" "emptyDir":map] map["name":"default-token-96vwj" "secret":map["secretName":"default-token-96vwj" "defaultMode":'u01a4']]] "dnsPolicy":"ClusterFirst" "serviceAccount":"default" "nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt" "hostname":"elasticsearch-0" "containers":[map["name":"elasticsearch" "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "readinessProbe":map["exec":map["command":["curl" "--fail" "--silent" "--output" "/dev/null" "--user" "elastic:changeme" "localhost:9200"]] "initialDelaySeconds":'x05' "timeoutSeconds":'x04' "periodSeconds":'x05' "successThreshold":'x01' "failureThreshold":'x03'] "terminationMessagePath":"/dev/termination-log" "imagePullPolicy":"Always" 
"command":["bin/elasticsearch"] "args":["-Ehttp.host=0.0.0.0" "-Etransport.host=127.0.0.1"] "resources":map["requests":map["cpu":"100m"]] "volumeMounts":[map["name":"data" "mountPath":"/data"] map["name":"default-token-96vwj" "readOnly":%!q(bool=true) "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"]] "terminationMessagePolicy":"File"]]] "status":map["qosClass":"Burstable" "phase":"Running" "conditions":[map["type":"Initialized" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"] map["type":"Ready" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:30:17Z"] map["type":"PodScheduled" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"]] "hostIP":"10.166.0.2" "podIP":"10.36.0.11" "startTime":"2019-01-03T10:29:53Z" "containerStatuses":[map["name":"elasticsearch" "state":map["running":map["startedAt":"2019-01-03T10:29:55Z"]] "lastState":map "ready":%!q(bool=true) "restartCount":'x00' "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7" "containerID":"docker://e7f629b79da33b482b38fdb990717b3d61d114503961302e2e8feccb213bbd4b"]]]]}
    for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
    ~/w/jaeger-elasticsearch ❯❯❯


    How to can I modify my pod yaml file in order to assign it more resources and solve the kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container elasticsearch' message?










    share|improve this question

























      0












      0








      0








      I've created an elasticsearch service to apply it like backend to jaeger tracing, using this guide, all over Kubernetes GCP cluster.



      I have the elasticsearch service:



      ~/w/jaeger-elasticsearch ❯❯❯ kubectl get service elasticsearch
      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 8m
      ~/w/jaeger-elasticsearch ❯❯❯


      And their respective pod called elasticsearch-0



      ~/w/jaeger-elasticsearch ❯❯❯ kubectl get pods
      NAME READY STATUS RESTARTS AGE
      elasticsearch-0 1/1 Running 0 37m
      jaeger-agent-cnw9m 1/1 Running 0 2h
      jaeger-agent-dl5n9 1/1 Running 0 2h
      jaeger-agent-zzljk 1/1 Running 0 2h
      jaeger-collector-9879cd76-fvpz4 1/1 Running 0 2h
      jaeger-query-5584576487-dzqkd 1/1 Running 0 2h
      ~/w/jaeger-elasticsearch ❯❯❯ kubectl get pod elasticsearch-0
      NAME READY STATUS RESTARTS AGE
      elasticsearch-0 1/1 Running 0 38m
      ~/w/jaeger-elasticsearch ❯❯❯


      I've enter to my pod configuration on GCP, and I can see that my elasticsearch-0 pod have limited resources:



      apiVersion: v1
      kind: Pod
      metadata:
      annotations:
      kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
      creationTimestamp: 2019-01-03T09:11:10Z
      generateName: elasticsearch-


      And then, I want to assign it specific CPU request and CPU limit according to the documentation, and then, I proceed to modufy the pod manifest, adding the following directives:



      - cpu "2" in the args section:



      args:
      - -cpus
      - "2"


      And I am including a resources:requests field in the container resource, in order to specify a request of 0.5 CPU and I've include a resources:limits in order to specify a CPU limit of this way:



        limits:
      cpu: "1"
      requests:
      cpu: "0.5"


      My complete pod manifest is this (See numerals 1,2,3,4 and 5 numerals commented with # symbol):



apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
  creationTimestamp: 2019-01-03T09:11:10Z
  generateName: elasticsearch-
  labels:
    app: jaeger-elasticsearch
    controller-revision-hash: elasticsearch-8684f69799
    jaeger-infra: elasticsearch-replica
    statefulset.kubernetes.io/pod-name: elasticsearch-0
  name: elasticsearch-0
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: elasticsearch
    uid: 86578784-0f36-11e9-b8b1-42010aa60019
  resourceVersion: "2778"
  selfLink: /api/v1/namespaces/default/pods/elasticsearch-0
  uid: 82d3be2f-0f37-11e9-b8b1-42010aa60019
spec:
  containers:
  - args:
    - -Ehttp.host=0.0.0.0
    - -Etransport.host=127.0.0.1
    - -cpus # 1
    - "2" # 2
    command:
    - bin/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imagePullPolicy: Always
    name: elasticsearch
    readinessProbe:
      exec:
        command:
        - curl
        - --fail
        - --silent
        - --output
        - /dev/null
        - --user
        - elastic:changeme
        - localhost:9200
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 4
    resources: # 3
      limits:
        cpu: "1" # 4
      requests:
        cpu: "0.5" # 5
        # container has a request of 0.5 CPU
        #cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-96vwj
      readOnly: true
  dnsPolicy: ClusterFirst
  hostname: elasticsearch-0
  nodeName: gke-jaeger-persistent-st-default-pool-81004235-h8xt
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: elasticsearch
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: data
  - name: default-token-96vwj
    secret:
      defaultMode: 420
      secretName: default-token-96vwj
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imageID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7
    lastState: {}
    name: elasticsearch
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-01-03T09:11:13Z
  hostIP: 10.166.0.2
  phase: Running
  podIP: 10.36.0.10
  qosClass: Burstable
  startTime: 2019-01-03T09:11:10Z


      But when I apply my pod manifest file, I get the following output:



      Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
      Error from server (Conflict): error when applying patch:
      .
      .
      .
      for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
      ~/w/jaeger-elasticsearch ❯❯❯
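The conflict arises because the applied file was exported with kubectl get -o yaml and still carries server-managed fields (resourceVersion, uid, status, ...) that no longer match the live object; note also that this pod is owned by a StatefulSet, so the durable place for a resources block is the StatefulSet's pod template rather than the bare Pod. A hedged sketch of stripping the server-managed fields before re-applying (the field list is illustrative, not exhaustive):

```python
# Sketch: drop server-managed fields from an exported manifest dict so a
# re-apply does not race against the server's resourceVersion.
# The set of fields below is illustrative, not an exhaustive official list.
SERVER_MANAGED = {"resourceVersion", "uid", "selfLink",
                  "creationTimestamp", "ownerReferences", "generateName"}

def clean_manifest(manifest: dict) -> dict:
    # Remove the whole status stanza; it is owned by the cluster.
    cleaned = {k: v for k, v in manifest.items() if k != "status"}
    if "metadata" in cleaned:
        cleaned["metadata"] = {k: v for k, v in cleaned["metadata"].items()
                               if k not in SERVER_MANAGED}
    return cleaned

exported = {"kind": "Pod",
            "metadata": {"name": "elasticsearch-0",
                         "resourceVersion": "2778", "uid": "82d3be2f"},
            "status": {"phase": "Running"}}
print(clean_manifest(exported))
# {'kind': 'Pod', 'metadata': {'name': 'elasticsearch-0'}}
```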


      The complete output of my kubectl apply command is this:



      ~/w/jaeger-elasticsearch ❯❯❯ kubectl apply -f elasticsearch-0.yaml
      Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
      Error from server (Conflict): error when applying patch:
      {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"},"creationTimestamp":"2019-01-03T09:11:10Z","generateName":"elasticsearch-","labels":{"app":"jaeger-elasticsearch","controller-revision-hash":"elasticsearch-8684f69799","jaeger-infra":"elasticsearch-replica","statefulset.kubernetes.io/pod-name":"elasticsearch-0"},"name":"elasticsearch-0","namespace":"default","ownerReferences":[{"apiVersion":"apps/v1","blockOwnerDeletion":true,"controller":true,"kind":"StatefulSet","name":"elasticsearch","uid":"86578784-0f36-11e9-b8b1-42010aa60019"}],"resourceVersion":"2778","selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"command":["bin/elasticsearch"],"image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imagePullPolicy":"Always","name":"elasticsearch","readinessProbe":{"exec":{"command":["curl","--fail","--silent","--output","/dev/null","--user","elastic:changeme","localhost:9200"]},"failureThreshold":3,"initialDelaySeconds":5,"periodSeconds":5,"successThreshold":1,"timeoutSeconds":4},"resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/data","name":"data"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"default-token-96vwj","readOnly":true}]}],"dnsPolicy":"ClusterFirst","hostname":"elasticsearch-0","nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"default","serviceAccountName":"default","subdomain":"elasticsearch","terminationGracePeriodSeconds":30,"tolerations":[{"effect":"NoExecut
e","key":"node.kubernetes.io/not-ready","operator":"Exists","tolerationSeconds":300},{"effect":"NoExecute","key":"node.kubernetes.io/unreachable","operator":"Exists","tolerationSeconds":300}],"volumes":[{"emptyDir":{},"name":"data"},{"name":"default-token-96vwj","secret":{"defaultMode":420,"secretName":"default-token-96vwj"}}]},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:10Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:40Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2019-01-03T09:11:10Z","status":"True","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"hostIP":"10.166.0.2","phase":"Running","podIP":"10.36.0.10","qosClass":"Burstable","startTime":"2019-01-03T09:11:10Z"}}n"},"creationTimestamp":"2019-01-03T09:11:10Z","resourceVersion":"2778","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"$setElementOrder/containers":[{"name":"elasticsearch"}],"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"name":"elasticsearch","resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}}}]},"status":{"$setElementOrder/conditions":[{"type":"Initialized"},{"type":"Ready"},{"type":"PodScheduled"}],"conditions":[{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"Initialized"},{"lastTransitionTime":"2019-01-03T09:11:40Z","type":"Ready"},{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2d
adcf959bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"podIP":"10.36.0.10","startTime":"2019-01-03T09:11:10Z"}}
      to:
      Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
      Name: "elasticsearch-0", Namespace: "default"
      Object: &{map["kind":"Pod" "apiVersion":"v1" "metadata":map["selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0" "generateName":"elasticsearch-" "namespace":"default" "resourceVersion":"11515" "creationTimestamp":"2019-01-03T10:29:53Z""labels":map["controller-revision-hash":"elasticsearch-8684f69799" "jaeger-infra":"elasticsearch-replica" "statefulset.kubernetes.io/pod-name":"elasticsearch-0" "app":"jaeger-elasticsearch"] "annotations":map["kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"] "ownerReferences":[map["controller":%!q(bool=true) "blockOwnerDeletion":%!q(bool=true) "apiVersion":"apps/v1" "kind":"StatefulSet" "name":"elasticsearch" "uid":"86578784-0f36-11e9-b8b1-42010aa60019"]] "name":"elasticsearch-0" "uid":"81cba2ad-0f42-11e9-b8b1-42010aa60019"] "spec":map["restartPolicy":"Always" "terminationGracePeriodSeconds":'x1e' "serviceAccountName":"default" "securityContext":map "subdomain":"elasticsearch" "schedulerName":"default-scheduler" "tolerations":[map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'u012c' "key":"node.kubernetes.io/not-ready"] map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'u012c' "key":"node.kubernetes.io/unreachable"]] "volumes":[map["name":"data" "emptyDir":map] map["name":"default-token-96vwj" "secret":map["secretName":"default-token-96vwj" "defaultMode":'u01a4']]] "dnsPolicy":"ClusterFirst" "serviceAccount":"default" "nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt" "hostname":"elasticsearch-0" "containers":[map["name":"elasticsearch" "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "readinessProbe":map["exec":map["command":["curl" "--fail" "--silent" "--output" "/dev/null" "--user" "elastic:changeme" "localhost:9200"]] "initialDelaySeconds":'x05' "timeoutSeconds":'x04' "periodSeconds":'x05' "successThreshold":'x01' "failureThreshold":'x03'] "terminationMessagePath":"/dev/termination-log" "imagePullPolicy":"Always" 
"command":["bin/elasticsearch"] "args":["-Ehttp.host=0.0.0.0" "-Etransport.host=127.0.0.1"] "resources":map["requests":map["cpu":"100m"]] "volumeMounts":[map["name":"data" "mountPath":"/data"] map["name":"default-token-96vwj" "readOnly":%!q(bool=true) "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"]] "terminationMessagePolicy":"File"]]] "status":map["qosClass":"Burstable" "phase":"Running" "conditions":[map["type":"Initialized" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"] map["type":"Ready" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:30:17Z"] map["type":"PodScheduled" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"]] "hostIP":"10.166.0.2" "podIP":"10.36.0.11" "startTime":"2019-01-03T10:29:53Z" "containerStatuses":[map["name":"elasticsearch" "state":map["running":map["startedAt":"2019-01-03T10:29:55Z"]] "lastState":map "ready":%!q(bool=true) "restartCount":'x00' "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7" "containerID":"docker://e7f629b79da33b482b38fdb990717b3d61d114503961302e2e8feccb213bbd4b"]]]]}
      for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
      ~/w/jaeger-elasticsearch ❯❯❯


How can I modify my pod YAML file to assign it more resources and resolve the kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container elasticsearch' message?










      elasticsearch kubernetes yaml google-kubernetes-engine






      asked Jan 3 at 11:44









bgarcial
1 Answer






          Here's an article/guide on how to work with the limit-ranger and its default values [1]



          [1]https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-cpu-time-9eff74d3161b
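For context, the limit-ranger annotation appears because the default namespace has a LimitRange that injects a default CPU request (the 100m visible on the live object) into containers that declare none. A hypothetical LimitRange like the following would produce that behavior; the name and values here are illustrative, not taken from the asker's cluster:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults        # hypothetical name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m             # applied only to containers with no request of their own
```

Once a container declares its own requests/limits, the LimitRange default no longer applies; the annotation on an already-running pod is informational.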






answered Jan 3 at 19:35
Germán A.






























