Implementing default stackdriver behavior in GKE

I am setting up a GKE cluster for an application that produces structured JSON logs, which work very well with Kibana. However, I want to use Stackdriver instead.



The application's logs are available in Stackdriver with the default cluster configuration, and they appear as jsonPayload. I want more flexibility and control over the logging configuration, though, and when I customize it by following this guide, all of the logs for the same application appear only as textPayload. Ultimately, I want my logs to keep showing up as jsonPayload while I use my own fluentd agent configuration to take advantage of label_map.
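Roughly the kind of output block I have in mind for label_map (an illustrative, untested sketch in the same old-style syntax as my ConfigMap below; the plugin type is the Stackdriver output plugin, and the mapped fields and label names are placeholders, not values from the guide):

<match reform.**>
  type google_cloud
  # Hypothetical mapping: lift these payload fields into Stackdriver labels.
  # The label names are placeholders for illustration only.
  label_map { "app": "example.com/app", "thread": "example.com/thread" }
</match>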



I followed the guide on removing the default logging service and deploying a fluentd agent to an existing cluster, on the GKE versions below.
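A minimal sketch of the steps I ran, reconstructed from memory (the cluster name and zone match the describe output further down; the manifest file names are placeholders, not the guide's actual files):

# Turn off the built-in Stackdriver logging agent for the cluster.
gcloud container clusters update test-cluster-1 \
    --zone us-central1-a \
    --logging-service none

# Deploy the custom fluentd ConfigMap and DaemonSet from the guide (placeholder file names).
kubectl apply -f fluentd-configmap.yaml
kubectl apply -f fluentd-daemonset.yaml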



Gcloud version info:



Google Cloud SDK 228.0.0
bq 2.0.39
core 2018.12.07
gsutil 4.34


kubectl version info:



Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.9-gke.5", GitCommit:"d776b4deeb3655fa4b8f4e8e7e4651d00c5f4a98", GitTreeState:"clean", BuildDate:"2018-11-08T20:33:00Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}


gcloud container clusters describe snippet:



addonsConfig:
  httpLoadBalancing: {}
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig:
    disabled: true
createTime: '2018-12-24T19:31:21+00:00'
currentMasterVersion: 1.10.9-gke.5
currentNodeCount: 3
currentNodeVersion: 1.10.9-gke.5
initialClusterVersion: 1.10.9-gke.5
ipAllocationPolicy: {}
legacyAbac: {}
location: us-central1-a
locations:
- us-central1-a
loggingService: none
masterAuth:
  username: admin
masterAuthorizedNetworksConfig: {}
monitoringService: monitoring.googleapis.com
name: test-cluster-1
network: default
networkConfig:
  network: projects/test/global/networks/default
  subnetwork: projects/test/regions/us-central1/subnetworks/default
networkPolicy: {}
nodeConfig:
  diskSizeGb: 100
  diskType: pd-standard
  imageType: COS
  machineType: n1-standard-1
  serviceAccount: default
nodeIpv4CidrSize: 24
nodePools:
- autoscaling: {}
  config:
    diskSizeGb: 100
    diskType: pd-standard
    imageType: COS
    machineType: n1-standard-1
    serviceAccount: default
  initialNodeCount: 3
  management:
    autoRepair: true
    autoUpgrade: true
  name: default-pool
  status: RUNNING
  version: 1.10.9-gke.5
status: RUNNING
subnetwork: default
zone: us-central1-a


Below is what is included in my ConfigMap for the fluentd DaemonSet:



<source>
  type tail
  format none
  time_key time
  path /var/log/containers/*.log
  pos_file /var/log/gcp-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%N%Z
  tag reform.*
  read_from_head true
</source>
<filter reform.**>
  type parser
  format json
  reserve_data true
  suppress_parse_error_log true
  key_name log
</filter>


Here is an example JSON log line from my application (escaped newlines shown as \n):

{"log":"org.test.interceptor","lvl":"INFO","thread":"main","msg":"Inbound Message\n----------------------------\nID: 44\nResponse-Code: 401\nEncoding: UTF-8\nContent-Type: application/json;charset=UTF-8\nHeaders: {Date=[Mon, 31 Dec 2018 14:43:47 GMT], }\nPayload: {\"errorType\":\"AnException\",\"details\":[\"invalid credentials\"],\"message\":\"credentials are invalid\"}\n--------------------------------------","@timestamp":"2018-12-31T14:43:47.805+00:00","app":"the-app"}



The result with the above configuration is below:



{
  insertId: "3vycfdg1drp34o"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0-nds8d"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "the-app-68fb6c5c8-mq5b5"
    container.googleapis.com/stream: "stdout"
  }
  logName: "projects/test/logs/the-app"
  receiveTimestamp: "2018-12-28T20:14:04.297451043Z"
  resource: {
    labels: {
      cluster_name: "test-cluster-1"
      container_name: "the-app"
      instance_id: "234768123"
      namespace_id: "default"
      pod_id: "the-app-68fb6c5c8-mq5b5"
      project_id: "test"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "INFO"
  textPayload: "org.test.interceptor"
  timestamp: "2018-12-28T20:14:03Z"
}


I have even tried wrapping the whole JSON map into one field, since it appears that only the "log" field is being parsed. I considered writing an explicit parser, but that seemed infeasible: the log entry is already JSON, and the fields change from call to call, so having to anticipate which fields to parse would not be ideal.



I expected all of the fields in my log to appear under jsonPayload in the Stackdriver log entry. Ultimately, I want to mimic what the default Stackdriver logging service on a cluster does, where our logs at least appeared as jsonPayload.
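Something like this hypothetical entry is what I am after (a sketch built from the example log above, not real output; the resource and label metadata are omitted, and how severity would be derived is an assumption):

{
  jsonPayload: {
    log: "org.test.interceptor"
    lvl: "INFO"
    thread: "main"
    msg: "<the full inbound message text>"
    app: "the-app"
  }
  severity: "INFO"
  timestamp: "2018-12-31T14:43:47.805Z"
}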
Tags: fluentd, stackdriver, gke, google-cloud-stackdriver






asked Jan 2 at 16:44, edited Jan 3 at 13:12 – akilah2010

  • Need more clarity on what's intended and expected. How do the logs not appear as they do by default? jsonPayload collects details from metadata; textPayload is what can be customized.

    – Asif Tanwir
    Jan 2 at 21:51

  • @AsifTanwir thanks for the comment. As mentioned in the original question, the default behavior is for the logs to appear in jsonPayload. With my configuration, only the value of the "log" field appears, in textPayload, as seen in the examples posted. textPayload can be customized, but only when the log is not already in JSON format. Since the logs my application creates are already JSON, I expected the manually configured fluentd agent to pick that up and register them as jsonPayload, just like the default Stackdriver agent in my GKE cluster did.

    – akilah2010
    Jan 3 at 13:11

  • I suspect type tail with format none is not helping. Can you try setting the format to json or multiline, and update?

    – Asif Tanwir
    Jan 3 at 21:52
1 Answer
I suspect type tail with format none in your ConfigMap for the fluentd DaemonSet is not helping. Can you try setting the format to json or multiline, and update?
type tail
format none
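For example, a minimal, untested sketch of the question's source block with only the format changed to json, so the Docker JSON wrapper gets parsed and a log key exists for the downstream parser filter (everything else is left exactly as in the question):

<source>
  type tail
  format json
  time_key time
  path /var/log/containers/*.log
  pos_file /var/log/gcp-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%N%Z
  tag reform.*
  read_from_head true
</source>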
answered Jan 8 at 15:35 – Asif Tanwir