Writing OpenTSDB to Bigtable with HTTP POST not working (using Kubernetes)
Updated with more information




I am trying to set up OpenTSDB on Bigtable, following this guide:
https://cloud.google.com/solutions/opentsdb-cloud-platform



Works well, all good.



Now I am trying to expose the opentsdb-write service with a Service of type LoadBalancer. That seems to work well, too.



Note: using a GCP load balancer.
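For reference, the exposure looks roughly like the manifest below. This is only a sketch: the service name and `app` label follow the tutorial's opentsdb-write deployment, and port 4242 is the OpenTSDB default TSD port, both assumptions on my part.

```yaml
# Sketch: expose the opentsdb-write deployment via a GCP load balancer.
# Names, labels, and port are assumptions based on the tutorial's manifests.
apiVersion: v1
kind: Service
metadata:
  name: opentsdb-write
spec:
  type: LoadBalancer       # provisions a GCP network load balancer
  selector:
    app: opentsdb-write    # must match the deployment's pod labels
  ports:
    - port: 4242           # default OpenTSDB TSD listen port
      targetPort: 4242
```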



I am then using Insomnia to send a POST to the /api/put endpoint, and I get a 204 as expected (adding ?details shows no errors, and neither does ?sync; see http://opentsdb.net/docs/build/html/api_http/put.html).
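The body I'm sending is a standard /api/put datapoint like the one below; the metric name, tag values, and the load-balancer address in the comment are placeholders, not my real values.

```python
import json
import time

# A single OpenTSDB datapoint for POST /api/put
# (metric and tags are placeholder values, not the real ones from my setup).
datapoint = {
    "metric": "sys.cpu.user",
    "timestamp": int(time.time()),  # seconds since epoch
    "value": 42.5,
    "tags": {"host": "web01"},
}

body = json.dumps(datapoint)

# Sent with e.g. (LOAD_BALANCER_IP = external IP of the opentsdb-write Service):
#   curl -X POST "http://LOAD_BALANCER_IP:4242/api/put?details" \
#        -H "Content-Type: application/json" -d "$body"
print(body)
```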



When querying the data (GET on /api/query), I don't see it (same effect in Grafana). I also do not see any data added to the tsdb table in Bigtable.
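The query I'm issuing is equivalent to this /api/query request body (again, the metric and tags are placeholders matching the datapoint above):

```python
import json

# OpenTSDB /api/query request body asking back the datapoint written above
# ("1h-ago" is a relative start time; metric/tags are placeholders).
query = {
    "start": "1h-ago",
    "queries": [
        {
            "aggregator": "sum",
            "metric": "sys.cpu.user",
            "tags": {"host": "web01"},
        }
    ],
}

body = json.dumps(query)

# POST to http://LOAD_BALANCER_IP:4242/api/query, or equivalently a GET with
# the query string  ?start=1h-ago&m=sum:sys.cpu.user{host=web01}
print(body)
```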



My conclusion: no data is written to Bigtable, although the TSD returns 204.



Interesting fact: the metric is created (I can see it in Bigtable via cbt read tsdb-uid), and the autocomplete in the OpenTSDB UI (and Grafana) picks the metric up right away. But no data points.



When I use the Heapster example as in the tutorial, it all works.



And the interesting part (to me):



NOTE: A few times, after a massive delay or after stopping/restarting the Kubernetes cluster, the data suddenly appeared. I could not reproduce this as of now.



I must be missing something really simple.



Note: I don't see any errors in the logs (Stackdriver) or the UI (OpenTSDB UI), nor in Bigtable, Kubernetes, or anywhere else I can think of.



Note: the configs I am using are the ones linked in the tutorial.



The put request I am using (note the 204):



[Screenshot: the POST to /api/put in Insomnia, returning 204 No Content]



and if I add ?details, it indicates success:



[Screenshot: the same POST with ?details, indicating success with no errors]



  • We had somewhat the same error and found that when writing a metric around 25 times, it somehow magically appears. But we haven't found the issue yet. – dbanck, Jan 2 at 16:31











  • Thanks for the comment @dbanck – I just found the same; it must be held back somewhere. As soon as it has some 10 data points, it writes them to Bigtable. So far, I couldn't see (1) that data is lost in the process or (2) how to solve it. Will investigate some more. – Pinguin Dirk, Jan 2 at 16:34
kubernetes google-cloud-platform grafana bigtable opentsdb
edited Jan 2 at 10:49

asked Jan 1 at 22:22

Pinguin Dirk
1 Answer
My guess is that this relates to the OpenTSDB flush frequency. When a TSDB cluster is shut down, there's an automatic flush. I'm not 100% sure, but I think that the tsd.storage.flush_interval configuration manages that process.



You can reach the team that maintains the libraries via the google-cloud-bigtable-discuss group (which you can get to from the Cloud Bigtable support page) for more nuanced discussions.



As an FYI, we (Google) are actively updating https://cloud.google.com/solutions/opentsdb-cloud-platform to the latest versions of OpenTSDB and AsyncBigtable, which should improve performance at high volumes.
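If the flush interval is indeed the culprit, it can be tuned in opentsdb.conf. The property name comes from this answer (which is itself hedged); the value below, in milliseconds, is only an illustrative assumption, not a recommendation:

```
# opentsdb.conf (sketch): flush buffered writes to storage more often.
# 1000 ms is an illustrative value only.
tsd.storage.flush_interval = 1000
```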
  • Thanks for the explanation – I will look into this. Currently, writing more data, we are not experiencing the problem anymore. But I will have a look, out of curiosity. Also, thanks for the FYI and the big effort of you & team! – Pinguin Dirk, Jan 10 at 20:39











  • FYI, this one might be related: stackoverflow.com/questions/31383406/… – I added a comment to that question/answer. – Pinguin Dirk, Jan 12 at 10:47
answered Jan 10 at 20:21

Solomon Duskis