logging - How to make the Logstash 2.3.2 configuration file more flexible
I am using Logstash 2.3.2 to read and parse the log file of WSO2 ESB. I am able to parse the log entries and send them to an API in JSON format.
The log file contains entries at different log levels such as INFO, ERROR, WARN, and DEBUG. Currently, I only send a log entry through if its log type is ERROR.
Sample log file:
    TID: [-1234] [] [2016-05-26 11:22:34,366]  INFO {org.wso2.carbon.application.deployer.internal.ApplicationManager} -  Undeploying Carbon Application : CustomerService_CA_01_001_1.0.0... {org.wso2.carbon.application.deployer.internal.ApplicationManager}
    TID: [-1234] [] [2016-05-26 11:22:35,539]  INFO {org.apache.axis2.transport.jms.ServiceTaskManager} -  Task manager for service : CustomerService_01_001 shutdown {org.apache.axis2.transport.jms.ServiceTaskManager}
    TID: [-1234] [] [2016-05-26 11:22:35,545]  INFO {org.apache.axis2.transport.jms.JMSListener} -  Stopped listening for JMS messages for service : CustomerService_01_001 {org.apache.axis2.transport.jms.JMSListener}
    TID: [-1234] [] [2016-05-26 11:22:35,549]  INFO {org.apache.synapse.core.axis2.ProxyService} -  Stopped the proxy service : CustomerService_01_001 {org.apache.synapse.core.axis2.ProxyService}
    TID: [-1234] [] [2016-05-26 11:22:35,553]  INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} -  Removing Axis2 Service: CustomerService_01_001 {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
    TID: [-1234] [] [2016-05-26 11:22:35,572]  INFO {org.apache.synapse.deployers.ProxyServiceDeployer} -  ProxyService named 'CustomerService_01_001' has been undeployed {org.apache.synapse.deployers.ProxyServiceDeployer}
    TID: [-1234] [] [2016-05-26 18:10:26,465]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: LogAfterValidation WSAction: urn:mediate LogAfterValidation SOAPAction: urn:mediate LogAfterValidation MessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8b LogAfterValidation Direction: response {org.apache.synapse.mediators.builtin.LogMediator}
    TID: [-1234] [] [2016-05-26 18:10:26,469]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: XPath-LogLastName WSAction: urn:mediate XPath-LogLastName SOAPAction: urn:mediate XPath-LogLastName MessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8b XPath-LogLastName Direction: response XPath-LogLastName property_name lastname_value =
    XPath-LogLastName Envelope:
    TID: [-1234] [] [2016-05-26 18:10:26,477] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  Evaluation of the XPath expression //tns1:customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
    TID: [-1234] [] [2016-05-26 18:10:26,478] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  Unable to perform XSLT transformation using : Value {name ='null', keyValue ='gov:CustomerService/01/xslt/CustomerToCustomerSchemaMapping.xslt'} against source XPath : //tns1:customer reason : Evaluation of the XPath expression //tns1:customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
    org.apache.synapse.SynapseException: Evaluation of the XPath expression //tns1:customer did not result in an OMNode : null
        at org.apache.synapse.util.xpath.SourceXPathSupport.selectOMNode(SourceXPathSupport.java:100)
        at org.apache.synapse.mediators.transform.XSLTMediator.performXSLT(XSLTMediator.java:216)
        at org.apache.synapse.mediators.transform.XSLTMediator.mediate(XSLTMediator.java:196)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:214)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
        at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:185)
        at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
        at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:395)
        at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:142)
        at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
    TID: [-1234] [] [2016-05-26 18:10:26,500]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8b, Direction: response {org.apache.synapse.mediators.builtin.LogMediator}
    TID: [-1234] [] [2016-05-26 11:32:24,272]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  Running OS : windows 8 is not a tested Operating System for running WSO2 Carbon {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
    TID: [-1234] [] [2016-05-26 11:32:24,284]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  Carbon is configured to use the default keystore (wso2carbon.jks). To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile. {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
    TID: [-1] [] [2016-05-26 11:32:24,315]  INFO {org.wso2.carbon.databridge.agent.thrift.AgentHolder} -  Agent created ! {org.wso2.carbon.databridge.agent.thrift.AgentHolder}
Configuration file:
    input {
      stdin {}
      file {
        path => "c:\mydocument\project\sampleesblogs\wso2carbon.log"
        type => "wso2carbon"
        start_position => "beginning"
        codec => multiline {
          pattern => "(^\s*at .+)|^(?!TID).*$"
          negate => false
          what => "previous"
        }
      }
    }
    filter {
      if [type] == "wso2carbon" {
        grok {
          match => [ "message", "TID:%{SPACE}\[%{INT:log_sourcesystemid}\]%{SPACE}\[%{DATA:log_processname}\]%{SPACE}\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:log_messagetype}%{SPACE}{%{JAVACLASS:log_messagetitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}" ]
          add_tag => [ "grokked" ]
        }
        if "grokked" in [tags] {
          grok {
            match => ["log_messagetype", "ERROR"]
            add_tag => [ "loglevelerror" ]
          }
        }
        if !( "_grokparsefailure" in [tags] ) {
          grok {
            match => [ "message", "%{GREEDYDATA:log_stacktrace}" ]
            add_tag => [ "grokked" ]
          }
          date {
            match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
            target => "timestamp"
            timezone => "UTC"
          }
        }
      }
      if ( "multiline" in [tags] ) {
        grok {
          match => [ "message", "%{GREEDYDATA:log_stacktrace}" ]
          add_tag => [ "multiline" ]
          tag_on_failure => [ "multiline" ]
        }
        date {
          match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
          target => "timestamp"
        }
      }
    }
    output {
      if [type] == "wso2carbon" {
        if "loglevelerror" in [tags] {
          stdout { }
          http {
            url => "https://localhost:8086/messages"
            http_method => "post"
            format => "json"
            mapping => ["timestamp","%{timestamp}","messagetype","%{log_messagetype}","messagetitle","%{log_messagetitle}","message","%{log_message}","sourcesystemid","%{log_sourcesystemid}","stacktrace","%{log_stacktrace}"]
          }
        }
      }
    }
Problem statement:
I want to provide a flexible option so that the user can decide which types of log entries should be sent to the API. In the existing setup, only "ERROR" log entries are sent.
How I am doing it currently:
Currently I do it in the following way. First, in the filter, I check whether the most recently parsed log entry has the ERROR type, and if so I add a tag to the log entry.
    if "grokked" in [tags] {
      grok {
        match => ["log_messagetype", "ERROR"]
        add_tag => [ "loglevelerror" ]
      }
    }
And in the output section, I check again with an "if" condition whether the parsed entry has the required tag; if it does, I let it through, otherwise I drop or ignore it.
    if "loglevelerror" in [tags] {
      stdout { }
      http { .... }
    }
Now I want to check for the other log levels as well. Is there a better way of doing this, or do I have to put similar "if" blocks in place, with the same stuff inside them but a different condition?
To sum up: if I want to provide an option in the configuration (selectable by uncommenting or some other way) for which types of log entries (INFO, WARN, ERROR, DEBUG) are sent to the API, how can I achieve that?
You can skip the extra grok and use a conditional check at the output stage instead. You can check whether the value of a field matches a value or falls within an array.
Logstash conditionals reference
To check whether the level is ERROR:
    if [log_messagetype] == "ERROR" {
      # outputs
    }
To send both ERROR and WARN:
    if [log_messagetype] in ["ERROR", "WARN"] {
      # outputs
    }
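Applied to the configuration in the question, the whole choice of forwarded levels then collapses to a single editable array in the output section. A sketch (the `http` settings are copied from the question's output block; adjust the list of levels to taste):

```
output {
  if [type] == "wso2carbon" {
    # Edit this one array to choose which log levels are forwarded to the API.
    if [log_messagetype] in ["ERROR", "WARN"] {
      stdout { }
      http {
        url => "https://localhost:8086/messages"
        http_method => "post"
        format => "json"
      }
    }
  }
}
```

With this approach the tagging grok in the filter section is no longer needed for level selection.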
However, be careful not to write something like:
    if [log_messagetype] in ["ERROR"] {
This does not act as you would expect; see this question for more info on that.
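If you only want a single level, a safe alternative (a sketch) is plain equality, or an explicit `or` chain for a handful of levels, which avoids the single-element-array pitfall entirely:

```
# Single level: use equality rather than a one-element array.
if [log_messagetype] == "ERROR" {
  # outputs
}

# A few levels can also be spelled out with "or":
if [log_messagetype] == "ERROR" or [log_messagetype] == "WARN" {
  # outputs
}
```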