
Getting Console Log Output in JSON Log Format


Fahri Kutay Özüdoğru
Integration Consultant

Logstash (part of the Elastic Stack) is a data processing pipeline that allows you to collect data from various sources, transform it on the fly, and send it to your desired destination. It is most often used as a data pipeline for Elasticsearch, an open-source analytics and search engine. However, if your data is already in JSON format, there’s no need to use Logstash. By converting the WSO2 default Log4j format into JSON Log Format, you can bypass Logstash entirely. You can simply send logs to Filebeat, which will then store them in Elasticsearch.

Log4j and Log4j2 are de facto standard frameworks for logging application messages in Java. When you develop a Java application, you include statements that log information according to the level you set. Log4j is also the default application logging mechanism for WSO2 containers. Our colleague Rob Blaauboer wrote a great article if you would like to read more on how Log4j2 and WSO2 interact.

If you are using the ELK stack to publish data from various sources in different formats and to search, analyze, and visualize it, but you are not at liberty to configure Logstash to convert the log lines into JSON strings due to limitations of your project, or if you would like to avoid unnecessary serialization/deserialization operations that can be resource intensive, you might want to bypass Logstash and produce log output in JSON format instead.

Creating a Custom JSON Log Mediator

First, if we want to provide properties and display them as separate key/value pairs, we will need our own log mediator, since the built-in mediator sets properties the following way:

private void setCustomProperties(StringBuffer sb, MessageContext synCtx) {
    if (properties != null && !properties.isEmpty()) {
        for (MediatorProperty property : properties) {
            if (property != null) {
                sb.append(separator).append(property.getName()).append(" = ")
                        .append(property.getValue() != null
                                ? property.getValue()
                                : property.getEvaluatedExpression(synCtx));
            }
        }
    }
}

Upon investigating the built-in log mediator of WSO2, we can see that properties provided to the mediator are logged in the following format:

Separator (default is ‘,’) propertyName = propertyValue
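To make that concrete, here is a small self-contained sketch (our own illustrative class, not the actual Synapse code) that reproduces the same concatenation logic and shows the flat string it yields:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BuiltinFormatDemo {

    // Mirrors the built-in mediator's concatenation: separator, name, " = ", value
    public static String format(Map<String, String> properties, String separator) {
        StringBuffer sb = new StringBuffer();
        for (Map.Entry<String, String> e : properties.entrySet()) {
            sb.append(separator).append(e.getKey()).append(" = ").append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps insertion order, so the output is deterministic
        Map<String, String> props = new LinkedHashMap<>();
        props.put("correlationid", "1");
        props.put("sequence", "Generic_FaultSeq_v1");
        System.out.println(format(props, ", "));
        // Prints: , correlationid = 1, sequence = Generic_FaultSeq_v1
    }
}
```

A flat string like this ends up in the `message` field as a whole; turning it back into structured JSON downstream requires parsing, which is exactly what we want to avoid.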

Instead of a StringBuffer, which would make it harder for us to build JSON objects, we will use a HashMap and put the properties into it:

private Map<String, String> getCustomLogMessage(MessageContext synCtx) {
    Map<String, String> map = new HashMap<>();
    setCustomProperties(map, synCtx);
    return map;
}

private void setCustomProperties(Map<String, String> map, MessageContext synCtx) {
    if (properties != null && !properties.isEmpty()) {
        for (MediatorProperty property : properties) {
            if (property != null) {
                map.put(property.getName(),
                        property.getValue() != null
                                ? property.getValue()
                                : property.getEvaluatedExpression(synCtx));
            }
        }
    }
}
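The payoff of holding the properties in a map is that serializing them to JSON becomes straightforward. Here is a minimal hand-rolled illustration; a real mediator would delegate this to the logging layout or a JSON library, and this version skips value escaping (the class name is ours):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class MapToJsonDemo {

    // Naive map-to-JSON serialization: quote each key and value, join with commas.
    // Illustration only: does not escape quotes or special characters in values.
    public static String toJson(Map<String, String> map) {
        return map.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":\"" + e.getValue() + "\"")
                .collect(Collectors.joining(",", "{", "}"));
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("correlationid", "1");
        props.put("sequence", "Generic_FaultSeq_v1");
        System.out.println(toJson(props));
        // Prints: {"correlationid":"1","sequence":"Generic_FaultSeq_v1"}
    }
}
```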

Since the public void auditLog(Object msg) method of the MediatorLog class, shown below, formats the message by calling getFormattedLog from the LoggingUtils class,

/**
 * Log a message at level INFO to all available/enabled logs.
 */
public void auditLog(Object msg) {
    String formattedMsg = LoggingUtils.getFormattedLog(synCtx, msg);
    defaultLog.info(formattedMsg);
    if (synCtx.getServiceLog() != null) {
        synCtx.getServiceLog().info(msg);
    }
    if (traceOn) {
        traceLog.info(formattedMsg);
    }
}

we will either put the message into the ThreadContext or call the info (or any other level) method of the org.apache.log4j.Logger class directly, without having the log formatted:

switch (category) {
    case CATEGORY_INFO :
        //synLog.auditLog(getLogMessage(synCtx));
        ThreadContext.putAll(getLogMessage(synCtx));
        log.info(new ObjectMessage(getLogMessage(synCtx)));
        break;
    case CATEGORY_ERROR :
        //synLog.auditError(getLogMessage(synCtx));
        ThreadContext.putAll(getLogMessage(synCtx));
        log.error(new ObjectMessage(getLogMessage(synCtx)));
        break;
}

//Clear the map!
ThreadContext.clearMap();

synLog.traceOrDebug("End : Log mediator");

If you put the message into the thread context, make sure to clear the map afterwards. If you do not, old properties from earlier log actions may appear in new log events, because of the MDC mechanism that the ThreadContext uses.
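The leak is easy to reproduce outside WSO2. The sketch below simulates the per-thread MDC map with a plain ThreadLocal (a stand-in for the real ThreadContext, just to demonstrate the behavior):

```java
import java.util.HashMap;
import java.util.Map;

public class MdcLeakDemo {

    // Stand-in for the MDC: one map per thread, just like ThreadContext
    static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void logAction(Map<String, String> props, boolean clearAfter) {
        CONTEXT.get().putAll(props);
        System.out.println("event context: " + CONTEXT.get());
        if (clearAfter) {
            CONTEXT.get().clear(); // the equivalent of ThreadContext.clearMap()
        }
    }

    public static void main(String[] args) {
        logAction(Map.of("correlationid", "1"), false);
        // Without clearing, "correlationid" from the first log action leaks
        // into the second event's context on the same worker thread:
        logAction(Map.of("sequence", "Generic_FaultSeq_v1"), true);
    }
}
```

Worker threads in the Enterprise Integrator are pooled and reused, so a leaked entry can surface in a completely unrelated mediation flow later on.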

We will create an OSGi bundle with Factory and Serializer classes. The custom mediator QName in this case is jsonLog, which is the mediator name we are going to use when calling this mediator from our configurations.

We will subsequently drop the .jar file into {EI_HOME}/dropins folder.

Configure log4j.properties file (Enterprise Integrator 6.x.x)

The second step is to configure how your logs are formatted. There are plenty of layout libraries available; for this project we used JSONEventLayoutV1. Build the source code and move the resulting .jar into the {EI_HOME}/lib folder. Then you can configure the log4j.properties file. Example:

log4j.appender.CARBON_CONSOLE=org.wso2.carbon.utils.logging.appenders.CarbonConsoleAppender
log4j.appender.CARBON_CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1

Example Usage of Custom Log Mediator

Here is an example of how to use our new custom log mediator. The usage is similar to the built-in Log Mediator. We provide some properties, such as the correlation ID, message ID, name of the sequence, source and destination of the message, a short description of the API, and the payload itself, just to test whether the JSON object is created properly.

<jsonLog level="custom">
    <property expression="$func:correlationid" name="correlationid"/>
    <property expression="$func:message" name="messageid"/>
    <property expression="$func:sequence" name="sequence"/>
    <property expression="$func:source" name="source"/>
    <property expression="$func:destination" name="destination"/>
    <property expression="$func:description" name="description"/>
    <property expression="$ctx:current_body" name="payload"/>
</jsonLog>  

These configurations will produce the following JSON console log:

{
  "@timestamp": "2022-11-21T03:59:00.542Z",
  "ecs.version": "1.2.0",
  "log.level": "INFO",
  "message": "{sequence=Generic_FaultSeq_v1, payload=<jsonObject><key1>value1</key1><key2>value2</key2></jsonObject>, destination=null, messageid=null, description=Start fault processing, correlationid=null, source=null}",
  "process.thread.name": "PassThroughMessageProcessor-1",
  "log.logger": "nl.rijksweb.mediator.JsonLogMediator",
  "labels": {
    "Correlation-ID": "2b530f10-e410-4cc7-878a-5dd42793aac2",
    "bundle.id": "255",
    "bundle.name": "synapse-core",
    "bundle.version": "2.1.7.wso2v271",
    "correlationid": "1",
    "description": "Start fault processing",
    "destination": "some other system",
    "messageid": "1",
    "payload": "<jsonObject><key1>value1</key1><key2>value2</key2></jsonObject>",
    "sequence": "Generic_FaultSeq_v1",
    "source": "some system"
  }
}

Configure log4j2.properties file (Micro Integrator)

Just as we did on the Enterprise Integrator, on the Micro Integrator we also need to configure how logs are formatted.
With the changes to the dependencies and the Carbon Kernel logging framework, downloading a third-party JSON layout did not work for MI 4.1.0.

In this case, instead of a third-party option, upgrading the Pax Logging version from wso2v4 to wso2v5 did the trick, since wso2v5 includes log4j-layout-template-json and wso2v4 does not. You can download the jar files pax-logging-api-2.1.0-wso2v5 and pax-logging-log4j2-2.1.0-wso2v5 and move them into the {EI_HOME}/dropins folder.

Example of log4j2.properties file:

appender.CARBON_CONSOLE.type = Console
appender.CARBON_CONSOLE.name = CARBON_CONSOLE
appender.CARBON_CONSOLE.layout = JsonTemplateLayout
appender.CARBON_CONSOLE.layout.eventTemplateUri = classpath:EcsLayout.json
appender.CARBON_CONSOLE.layout.locationInfoEnabled = true
appender.CARBON_CONSOLE.filter.threshold.type = ThresholdFilter
appender.CARBON_CONSOLE.filter.threshold.level = DEBUG

With these configurations, calling the jsonLog mediator we created above will output the following JSON console log:

{
  "@timestamp": "2022-11-21T03:59:00.542Z",
  "ecs.version": "1.2.0",
  "log.level": "INFO",
  "message": "{sequence=Generic_FaultSeq_v1, payload=<jsonObject><key1>value1</key1><key2>value2</key2></jsonObject>, destination=null, messageid=null, description=Start fault processing, correlationid=null, source=null}",
  "process.thread.name": "PassThroughMessageProcessor-1",
  "log.logger": "nl.rijksweb.mediator.JsonLogMediator",
  "labels": {
    "Correlation-ID": "2b530f10-e410-4cc7-878a-5dd42793aac2",
    "bundle.id": "255",
    "bundle.name": "synapse-core",
    "bundle.version": "2.1.7.wso2v271",
    "correlationid": "null",
    "description": "Start fault processing",
    "destination": "null",
    "messageid": "null",
    "payload": "<jsonObject><key1>value1</key1><key2>value2</key2></jsonObject>",
    "sequence": "Generic_FaultSeq_v1",
    "source": "null"
  }
}

Conclusion

Logging is extremely important for any enterprise software application, and WSO2 is no exception. Logging information from the moment a message enters the system, through the mediation flow it takes, to how it leaves the system can provide system administrators and developers with critical information.

Tools like the ELK stack aggregate all your logs from multiple sources into a single, central place, which helps greatly when debugging an issue. But implementing them can come with its own challenges.
With the custom jsonLog mediator, you can output valid JSON objects to the console (or a file) and bypass Logstash, avoiding unnecessary serialization/deserialization, which can be resource intensive.

Special thanks to Philip Akyempon and Steve Liem for their support.

I sincerely hope this information helps you with your projects. If you have any questions, don't hesitate to contact us.
