How to access the service deployed on one pod via another pod in Kubernetes?












Can anybody let me know how we can access a service deployed on one pod from another pod in a Kubernetes cluster?



Example:



There is an nginx service deployed on Node1 (with pod name nginx-12345) and another service deployed on Node2 (with pod name service-23456). Now, if 'service' wants to communicate with 'nginx' for some reason, how can we access 'nginx' from inside the 'service-23456' pod?










docker kubernetes kubectl kubelet






asked Nov 21 '18 at 6:16 by Aditya Datta
edited Nov 21 '18 at 13:06 by suren








  • you are not explaining yourself properly. What is a 'service' for you? Kubernetes has flat networking by default, so all pods and nodes can talk to each other, no matter their namespaces.

    – suren
    Nov 21 '18 at 11:22











  • I meant any random service. Services are just a mechanism for accessing deployments. The comments in the section below describe the issue. What I want to know is: if a service such as nginx is deployed on one pod (say pod 1) and another service named eureka is deployed on a second pod (say pod 2), how can we access nginx from pod 2? I am able to access services from the master server, but not from the corresponding pods.

    – Aditya Datta
    Nov 21 '18 at 12:29













  • OK. So, as I said, k8s networking is flat, so you should be able to talk from one pod to another. How did you create the cluster? If you followed any doc, can you paste it here?

    – suren
    Nov 21 '18 at 14:15











  • Hi Suren, I followed this link: howtoforge.com/tutorial/centos-kubernetes-docker-cluster

    – Aditya Datta
    Nov 22 '18 at 7:23














3 Answers
There are various ways to access a service in Kubernetes: you can expose your services through NodePort or LoadBalancer and access them from outside the cluster.



See the official documentation on how to access services.



The official Kubernetes documentation states:




Some clusters may allow you to ssh to a node in the cluster. From there you may be able to access cluster services. This is a non-standard method, and will work on some clusters but not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.




So accessing a service directly from another node depends on which type of Kubernetes cluster you're using.



EDIT:



Once the service is deployed in your cluster, you should be able to contact it using its name, and Kube-DNS will answer with the correct ClusterIP for reaching your final pods. ClusterIPs are implemented by iptables rules that kube-proxy creates on the worker nodes, which NAT your request to the final container's IP.
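As a quick sanity check (editor's sketch, not part of the original answer; it assumes kube-proxy runs in its default iptables mode and that you have root on a worker node):

# List the NAT entry chain that kube-proxy maintains for Services.
# Chain names such as KUBE-SERVICES may vary between Kubernetes versions.
sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20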



The Kube-DNS naming convention is service.namespace.svc.cluster-domain.tld and the default cluster domain is cluster.local.



For example, if you want to contact a service called mysql in the db namespace from any namespace, you can simply speak to mysql.db.svc.cluster.local.
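A minimal sketch of how to verify this from another pod (editor's addition; 'service-23456' and 'nginx' are placeholder names taken from the question, and the container image is assumed to ship nslookup and curl):

# Resolve the service name through cluster DNS from inside the other pod.
kubectl exec -it service-23456 -- nslookup nginx.default.svc.cluster.local
# Call the service over HTTP using the same DNS name.
kubectl exec -it service-23456 -- curl -s http://nginx.default.svc.cluster.local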



If this is not working, there might be an issue with kube-dns in your cluster.
Hope this helps.



EDIT 2:
There are some known issues with DNS resolution on Ubuntu; the official Kubernetes documentation states:




Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm 1.11 automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
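A hedged sketch of the manual fix described in that quote, assuming a kubeadm-style install where extra kubelet flags can be set via KUBELET_EXTRA_ARGS (the file path below is an assumption and differs between distributions):

# Point kubelet at the real resolv.conf instead of the systemd-resolved stub.
# WARNING: this overwrites the file; merge with any existing flags instead if needed.
echo 'KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet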







answered Nov 21 '18 at 6:34 by Prafull Ladha, edited Nov 22 '18 at 10:46
  • Hi Prafull, sorry if my question was not clear. I have reframed the question. I actually want to access a service which is hosted on one pod from another pod. Is there any way we can do it? I am using the flannel network here.

    – Aditya Datta
    Nov 21 '18 at 6:47













  • Hi Prafull, you are right regarding the DNS convention. I have deployed 'eureka' on one of the pods, and from the master server I can do an nslookup and query DNS, and it returns the cluster IP for 'eureka-server.default.svc.cluster.local'. I ran the command 'nslookup eureka-server.default.svc.cluster.local 10.96.0.10', but when I run the same command from any of the pods it returns 'could not resolve host', although the name server for the pods is also set to 10.96.0.10.

    – Aditya Datta
    Nov 21 '18 at 10:37











  • Hi Aditya, following is the guide for debugging DNS resolution; could you please check if everything is working fine on your end: kubernetes.io/docs/tasks/administer-cluster/…

    – Prafull Ladha
    Nov 21 '18 at 10:42













  • Hi Prafull, I checked the article and found that everything is running. I am able to see DNS logs, but only if I run the DNS query from the master server. When I do an nslookup from the corresponding pods, it is not able to connect and, correspondingly, no DNS logs are generated either. I think the pods are not referring to the '/etc/resolv.conf' file.

    – Aditya Datta
    Nov 22 '18 at 7:22











  • My master server is CentOS 7 and both of my pods are Ubuntu systems. I think there might be an issue with Ubuntu.

    – Aditya Datta
    Nov 22 '18 at 7:28



















Did you expose your deployment as a service? If so, simply access it by its DNS name, like http://nginx-1234 - or, if it's in a different namespace, http://nginx-1234.default.svc (change "default" to the namespace the service lives in) or http://nginx-1234.default.svc.cluster.local



Now, if you did NOT expose a service, then you probably should. You don't need to expose it to the outside world; simply don't define a service type and it will only be available inside your cluster.
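For illustration, a minimal sketch (editor's addition; 'nginx' is a placeholder deployment name): omitting --type gives you a ClusterIP service that is reachable only from inside the cluster.

# Expose an existing deployment cluster-internally; the type defaults to ClusterIP.
kubectl expose deployment nginx --port=80 --target-port=80
# Other pods can now reach it at http://nginx.<namespace>.svc.cluster.local
kubectl get svc nginx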



If for some reason you don't want to expose a service (can't think of any reason), you can query the API server for the pod IP. You will need to provide a token for authentication, but one is available inside the pod:



get the token:



TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)


call the API server:



curl https://kubernetes.default.svc/api/v1/namespaces/default/pods --silent \
  --header "Authorization: Bearer $TOKEN" --insecure


You can refine your query by adding ?fieldSelector=spec.nodeName%3Dtargetnodename or similar (simply use a JSON path). The output can be parsed with https://stedolan.github.io/jq/ or any other JSON utility.
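For example, a sketch combining the field selector with jq (editor's addition; 'targetnodename' is a placeholder and jq is assumed to be installed in the pod):

# Same API call as above, filtered by node and parsed down to the pod IPs.
curl --silent --insecure \
  --header "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/default/pods?fieldSelector=spec.nodeName%3Dtargetnodename" \
  | jq -r '.items[].status.podIP'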






answered Nov 21 '18 at 7:06 by Markus Dresch
  • Hi Markus, I exposed my deployment as a service and I am using NodePort for this. I have deployed eureka on one of the nodes, and when I do an nslookup from the master server using '10.96.0.10' as the name server, it returns the correct result for the full FQDN. But when I run the nslookup command from any of the pods, it shows 'Could not resolve host', although the '/etc/resolv.conf' file of the pod shows '10.96.0.10' as the name server.

    – Aditya Datta
    Nov 21 '18 at 10:40











  • I don't know about eureka, but normally you don't have to change /etc/resolv.conf to communicate between services in your cluster.

    – Markus Dresch
    Nov 21 '18 at 11:34



















A similar question was answered here:
Kubernetes - How to acces to service from a web server in pod with a rest request



Just replace "ProductWebApp" with "nginx" and "DashboardWebApp" with "service".






answered Nov 21 '18 at 22:09 by apisim