How to monitor connections in a local network












I have a ton of services: Node(s), MySQL(s), Redis(s), Elastic(s)...

I want to monitor how they connect to each other: connection rate, number of alive connections, and so on (e.g. Node1 creates 30 connections per second to Node2/MySQL/Redis), similar to the HAProxy stats page shown below.

[Image: HAProxy stats page]

Currently I have two options:

  • HAProxy (proxy): I want to use a single HAProxy service to achieve this, but it seems very hard to use ACLs to detect which connections need to be forwarded to which service.

  • ELK (log center): I would need to create log files on each service (Node, MySQL, Redis...) and then show them in the log center. That looks like a ton of work to do without a built-in feature like the HAProxy stats page.

How should I do this? Is a log center a good fit for this case?

































  • I work with such an environment, and I use Elasticsearch for application logs (Elastic is good as a full-text search engine) and Prometheus for metrics collection and analytics, specifically for monitoring and alerting. After much soul searching and testing, this is my recommendation.

    – Alexandre Juma
    Nov 27 '18 at 11:32











  • Looking at your history of questions, you have accepted none of them. Please refer to "What to do when someone answers".

    – Alexandre Juma
    Nov 30 '18 at 10:50


















microservices haproxy monitor






asked Nov 21 '18 at 7:44









Đinh Anh Huy

2 Answers






































The problem



I think your problem is not collecting and pipelining the statistics to Elasticsearch, but rather the ton of work involved in extracting metrics from your services, because most of them do not produce metric files/logs.



You would then need to export the metrics with some custom script, log them, capture the logs with Filebeat, and stream them to Logstash for text processing and metric extraction so they are indexed in a way that supports analytics, before finally sending them to Elasticsearch.



My take on the answer



At least for the three services you've referenced, there are Prometheus exporters readily available, and you can find them here. The exporters are simple processes that query your services' native statistics APIs and expose a Prometheus metrics endpoint for Prometheus to scrape (poll).
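To make the scrape target concrete, here is a minimal, dependency-free sketch of the text exposition format such an exporter serves on its `/metrics` endpoint. The metric name `service_connections_alive` and its labels are hypothetical, chosen only to mirror the per-service connection counts you want to track:

```python
# Sketch of the Prometheus text exposition format an exporter returns.
# Metric names and labels here are hypothetical illustrations; real
# exporters emit their own metric families.

def render_metrics(name, help_text, metric_type, samples):
    """Render one metric family in the Prometheus text format.

    samples: list of (labels_dict, value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {metric_type}"]
    for labels, value in samples:
        if labels:
            label_str = ",".join(f'{k}="{v}"' for k, v in labels.items())
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines)

# A gauge tracking alive connections per source/destination pair,
# roughly what a connection-monitoring exporter would expose.
output = render_metrics(
    "service_connections_alive",
    "Number of currently open connections.",
    "gauge",
    [({"src": "node1", "dst": "mysql"}, 30),
     ({"src": "node1", "dst": "redis"}, 12)],
)
print(output)
```

Prometheus polls this plain-text endpoint on each scrape interval and stores every sample as a time series keyed by the metric name and labels, which is what lets you later graph connection rates per source/destination pair.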



After Prometheus is scraping the metrics, you can display them in dashboards via Grafana (the de facto visualization layer for Prometheus) or bulk-export them to wherever you want (Elasticsearch, etc.) for visualization and exploration.



Conclusion



The benefits of this approach:




  1. Prometheus can auto-discover new nodes you add to your networks.

  2. Readily available exporters exist for HAProxy, Redis, and MySQL.

  3. No code needed: each exporter requires only minimal configuration
     specific to the monitored technology. It can easily be containerized
     and deployed if your environment is container oriented; otherwise you
     just need to run each exporter on the correct machines.

  4. Prometheus is very, very easy to deploy.
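To tie the pieces together, a minimal `prometheus.yml` scrape configuration might look like the sketch below. The hostnames are placeholders, and the ports are the conventional defaults for haproxy_exporter (9101), mysqld_exporter (9104), and redis_exporter (9121); check each exporter's documentation for your versions:

```yaml
# Hypothetical scrape configuration; hostnames are placeholders.
global:
  scrape_interval: 15s   # how often Prometheus polls each target

scrape_configs:
  - job_name: haproxy
    static_configs:
      - targets: ['haproxy-host:9101']   # haproxy_exporter
  - job_name: mysql
    static_configs:
      - targets: ['mysql-host:9104']     # mysqld_exporter
  - job_name: redis
    static_configs:
      - targets: ['redis-host:9121']     # redis_exporter
```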






answered Nov 27 '18 at 11:22 by Alexandre Juma
Use ELK: the Elasticsearch, Logstash, and Kibana stack, together with Filebeat.

  • Filebeat ships the log file content to Logstash.

  • Logstash scans, filters, and forwards the needed content to Elasticsearch.

  • Elasticsearch works as a database, storing the content from Logstash as JSON documents.

  • Kibana lets you search the indexed data; you can also plot graphs and other visuals from the relevant fields.
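As a rough sketch of the pipeline described above (the port, pattern, and index name are illustrative placeholders, not a tested configuration): Filebeat would ship logs to Logstash on Logstash's default beats port, and a pipeline like the following would parse them and index the result into Elasticsearch:

```
# Hypothetical Logstash pipeline: receive from Filebeat, extract fields, index.
input {
  beats {
    port => 5044            # default port for the Filebeat -> Logstash output
  }
}

filter {
  grok {
    # Illustrative pattern; adapt to each service's actual log format.
    match => { "message" => "%{IP:src} %{IP:dst} %{NUMBER:connections}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "service-connections-%{+YYYY.MM.dd}"
  }
}
```

The grok filter is the part that turns free-form log lines into structured fields Kibana can aggregate, which is where most of the per-service work in this approach goes.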






answered Nov 26 '18 at 16:10 by Vineet Sharma