Log files tell you what is happening with the software you are running. WSO2 products, like the Enterprise Integrator 6.6.0, use LOG4J2 to control the information that is shown in the logs.
LOG4J2 is a de facto standard framework for logging application messages in Java. Basically, when you are developing Java applications, you include several statements that log information according to the level that you are setting.
Log levels in LOG4J2
The built-in log levels for Log4j2 are:
- OFF
- FATAL
- ERROR
- WARN
- INFO
- DEBUG
- TRACE
- ALL
Setting a specific class to a specific log level means that the logging framework will output all log statements up to that level for that particular class to the log files.
In source code, at locations that are of interest for the system administrator, statements can be placed that emit information for the log at a certain level. The LOG4J2 configuration then controls which of these log statements in source code will actually be output to the log files.
Log levels are cumulative: when you set a level of TRACE in the configuration, you also receive all less verbose levels (DEBUG, INFO, WARN, ERROR, FATAL). Setting a level of ERROR, however, will NOT give you the output of the more verbose levels below it. This allows quite granular control over log output throughout a class.
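To make this concrete, the hypothetical logger below (the name and class are illustrative, not from the shipped configuration) shows how the configured level acts as a threshold:

```properties
# Hypothetical logger in log4j2.properties.
# At level INFO: INFO, WARN, ERROR and FATAL statements from this class
# are emitted; DEBUG and TRACE statements are suppressed.
logger.my-example.name = com.example.MyClass
logger.my-example.level = INFO

# Changing the level to TRACE would emit every statement, down to the
# most verbose:
# logger.my-example.level = TRACE
```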
It does not matter if you are viewing your logs with a text-editor or have an ELK Stack deployed, the logs will tell you the truth.
If you are not that familiar with logs: when you start the Enterprise Integrator from the command line or terminal with the integrator.sh (or .bat) command, the text shown on the console is the output of the logging framework, known as 'the console logs'. On disk, in [EI-HOME]/repository/logs/wso2carbon.log, you can find the persistent form of that logging, which is available even when you don't have access to the console where WSO2 Enterprise Integrator was started.
As you can see, we get a lot of INFO messages, which we can usually ignore since they simply tell us what the product is doing. But not every message can be ignored: ERRORs of course indicate that we encountered an issue that may need to be addressed.
LOG4J2 file
The LOG4J2 file, log4j2.properties, is found in the [EI-HOME]/conf directory. It is a hot file, meaning that you see the effect of a change almost instantly: first a message appears on the console that the file has changed, and as soon as the relevant piece of Java code is executed, the new log settings apply.
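This hot-reload behaviour comes from Log4j2's built-in configuration monitoring, which is typically controlled by a single property near the top of the file (the interval shown here is an assumption, check your own file):

```properties
# How often, in seconds, Log4j2 re-checks log4j2.properties for changes.
monitorInterval = 60
```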
So, what does the file look like?
It all starts with a list of appenders. An appender is responsible for sending log messages to a certain output destination. These destinations are often log files, but can also be a Linux syslog facility, a queue, or any other destination that can receive information.
In the Enterprise Integrator, several of these appenders are configured.
The first appender is the console appender, where you will find the definition of the carbon console output. Removing, for instance, the [%d] from the layout pattern omits the date-time stamp from the output; inserting it again reinstates the former situation.
Many appenders let you specify the layout of a log line using a pattern. A log line contains multiple pieces of information, such as the time at which the log statement was executed, the log level of the line, the class that emitted it, and the message that the class added. Using the pattern, you control which attributes of the log statement are written to the appender, and in which order.
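A console appender with such a pattern layout looks roughly like this (a sketch; the exact appender name and pattern in your log4j2.properties may differ):

```properties
appender.CARBON_CONSOLE.type = Console
appender.CARBON_CONSOLE.name = CARBON_CONSOLE
appender.CARBON_CONSOLE.layout.type = PatternLayout
# [%d] = date/time, %p = log level, %c = class, %m = message, %n = newline
appender.CARBON_CONSOLE.layout.pattern = [%d] %5p {%c} - %m%ex%n
```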
There are more types of appenders than the rolling file appender and console, but these are the two that are used in the Enterprise Integrator. When you look at the configuration you can see that there is a daily rollover of the carbon log as well as a policy based on size.
It is advisable to review these settings so that they suit your requirements. Setting the rollover parameters too tight means the log file rolls over quite often, which can make looking up information in it more tedious. Setting them too large can mean the log file becomes too big for your tooling to remain manageable and useful.
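A rolling file appender combining a daily rollover with a size-based policy can be sketched as below; the paths, date pattern and size limit are assumptions, so compare them with your own configuration:

```properties
# Sketch of a rolling file appender with daily rollover plus a size cap.
appender.CARBON_LOGFILE.type = RollingFile
appender.CARBON_LOGFILE.name = CARBON_LOGFILE
appender.CARBON_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/wso2carbon.log
appender.CARBON_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/wso2carbon-%d{MM-dd-yyyy}.log
appender.CARBON_LOGFILE.layout.type = PatternLayout
appender.CARBON_LOGFILE.layout.pattern = [%d] %5p {%c} - %m%ex%n
appender.CARBON_LOGFILE.policies.type = Policies
# Roll over once per day...
appender.CARBON_LOGFILE.policies.time.type = TimeBasedTriggeringPolicy
appender.CARBON_LOGFILE.policies.time.interval = 1
# ...or earlier, when the file exceeds the configured size.
appender.CARBON_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
appender.CARBON_LOGFILE.policies.size.size = 10MB
```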
A bit below the appenders you can find the loggers, defined in a list that you can see in the image above. A logger needs to be defined, like the example you see for org.apache.synapse, and also added to the list of loggers. When correctly defined, the logger is activated and its log messages appear when triggered.
Adding your own loggers
You can add your own loggers to this overview by simply specifying the Java class that you would like to log. In Java, classes are part of a package, and a package contains zero or more classes. Packages are hierarchical, like a tree structure.
A logger can be given a class name to be very specific, or you can specify a whole package to be logged. When specifying a package, the settings on that logger configuration apply to all classes within that package (and thus also to any classes in sub-packages). The higher up the hierarchy you configure a logger, for instance org.apache.synapse, the more underlying classes it covers. There are, for instance, loggers defined like org.apache.synapse.mediators, which only triggers for the mediators within Synapse and not for org.apache.synapse.transport.http.headers.
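To make the difference concrete, the two loggers sketched below target a whole package and one single class respectively (the logger names are illustrative, not taken from the shipped file):

```properties
# Applies to every class in org.apache.synapse.mediators and its sub-packages.
logger.synapse-mediators.name = org.apache.synapse.mediators
logger.synapse-mediators.level = DEBUG

# Applies to exactly one class.
logger.log-mediator.name = org.apache.synapse.mediators.builtin.LogMediator
logger.log-mediator.level = TRACE
```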
Creating a new logger means identifying the class or package that you would like to log, in this case org.yenlo.example, and creating the following three lines:
logger.org-yenlo-example.name=org.yenlo.example
logger.org-yenlo-example.level=DEBUG
logger.org-yenlo-example.appenderRef.CARBON_LOGFILE.ref = CARBON_LOGFILE
Lastly, we need to add it to the loggers list.
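That list is the comma-separated loggers= property in the same file; the new logger name is appended to it. The other entries shown here are placeholders for whatever is already in your file:

```properties
# Append the new logger name to the existing comma-separated list.
loggers = AUDIT_LOG, SERVICE_LOGGER, org-yenlo-example
```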
Now, when this class is used, and it was developed with LOG4J2 in mind, we will get messages.
Let’s increase the log level on a specific Java class.
Playing with LOG4J2
We will play around with logging a bit in this blog. In Enterprise Integrator 6.6.0, logging is configured via the log4j2.properties file and no longer from the Management Console.
Warning: take caution on a production system when setting log levels to anything more detailed than INFO (i.e. DEBUG or TRACE), as that makes the logs a lot more verbose and can impact system performance. Depending on server load, the log file on disk may grow very rapidly in size.
The log4j2.properties file can be found in the [EI-HOME]/conf directory. Use a text editor to make the changes.
In general, it is considered handy to change/uncomment the following lines and/or add the lines that are not there. Set them to DEBUG: this will cause them to start logging detailed information about the requests and responses coming into and out of WSO2 Enterprise Integrator, and those being sent out to and received from other systems. Setting them back to INFO turns this off again.
Now add the:
- synapse-transport-http-headers
- synapse-transport-http-wire
- httpclient-wire-header
- httpclient-wire-content
to the logger list.
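These logger names map to the wire- and header-logging classes; their definitions look roughly like this (verify the exact names against your own log4j2.properties), with DEBUG switching the wire logging on:

```properties
logger.synapse-transport-http-headers.name = org.apache.synapse.transport.http.headers
logger.synapse-transport-http-headers.level = DEBUG
logger.synapse-transport-http-wire.name = org.apache.synapse.transport.http.wire
logger.synapse-transport-http-wire.level = DEBUG
logger.httpclient-wire-header.name = httpclient.wire.header
logger.httpclient-wire-header.level = DEBUG
logger.httpclient-wire-content.name = httpclient.wire.content
logger.httpclient-wire-content.level = DEBUG
```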
By changing these lines, we are able to turn request and response wire logging on and off.
Let’s test this logger configuration change.
Start WSO2 Enterprise Integrator if it is not yet running. Create a simple proxy, called ‘test’, using the Management Console of the Enterprise Integrator with the following fields:
Use the ‘Try it’ functionality of the Test Proxy to send a message into it.
See the console output or the wso2carbon.log file to see the log information that we've configured to be logged.
Set the log level of logger.synapse-transport-http-wire and logger.synapse-transport-http-headers, as well as logger.httpclient-wire-header and logger.httpclient-wire-content, back to INFO. Wait a moment for the logging framework to detect your change, then send another message into the test proxy.
You'll see that the http-transport and wire-related log messages are not shown for your second try.
Logging configuration
We’ll change the log level of the mediators to see its effect when a message enters the system.
Let’s first create a very simple logging-echo proxy service that has one mediator.
Go to the main tab and click on “Add proxy service”.
This is the source code:
<?xml version="1.0" encoding="UTF-8"?>
<proxy name="logProxy" startOnLoad="true" transports="https http" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <log>
        <property expression="$ctx:SYSTEM_DATE" name="datetest"/>
      </log>
      <respond/>
    </inSequence>
    <outSequence/>
    <faultSequence/>
  </target>
</proxy>
Select “Custom proxy” as type and use “logProxy” as “Proxy Service Name”.
After filling in the name click on “Switch to source view” and replace the body with the source code shown above.
Click on save to finish creating the proxy.
Click “Try this service” of the logProxy so that we can send some data into the proxy.
In the try-it window supply a simple body.
Click “Send” to invoke the proxy service. Keep this browser tab open, you’ll need it later.
When you now inspect the logs in the console or wso2carbon.log file, you will see that there is a LogMediator line indicating that the proxy was invoked.
INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: /services/logProxy.logProxyHttpSoap12Endpoint, WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:0cf3cc3f-3bb9-49c7-b754-06534f185e47, Direction: request {org.apache.synapse.mediators.builtin.LogMediator}
Making adjustments in LOG4J2
Open the log4j2.properties file and add a new logger, as I showed a bit earlier. Add these lines:
logger.org-apache-synapse-mediators.name=org.apache.synapse.mediators
logger.org-apache-synapse-mediators.level=DEBUG
logger.org-apache-synapse-mediators.appenderRef.CARBON_LOGFILE.ref = CARBON_LOGFILE
And do not forget to add the logger to the loggers= list. Simply find the last entry and add ', org-apache-synapse-mediators' to the end. Save the file.
Now invoke the try-it service again for the logProxy, you should still have it opened in your browser. If not, go back to this instruction to see how to open the try-it service.
When you now inspect the logfile, then you will see more detail in the log output of the mediators.
For instance, this line should appear:
[2021-06-03 15:35:46,253] DEBUG {org.apache.synapse.mediators.builtin.LogMediator} - Start : Log mediator
[2021-06-03 15:35:46,254] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: /services/logProxy.logProxyHttpSoap12Endpoint, WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:43393d8a-f10f-4d2a-accf-fa6e5f8beea1, Direction: request, datetest = 6/3/21 3:35 PM
[2021-06-03 15:35:46,255] DEBUG {org.apache.synapse.mediators.builtin.LogMediator} - End : Log mediator
As you can see, we now have more information about the LogMediator that we just used. To be precise, about all mediators, since we set the logging on the mediators package rather than specifically on the Log mediator; output from the other mediators is simply not shown here.
Conclusion
Logging provides the system administrator or developers insight into what is happening inside the product. Log information, about what the product is doing when a message enters the system or logs that show which steps the mediation flow takes, is of high value for root-cause analysis.
At Yenlo we’ve developed several approaches on how to best log in our integrations, so that after the fact we can retrace how a message was processed in the integration layer.
Without proper logging, the product is a blackbox in which we cannot see what happens in case of unexpected errors. Appropriate logging can make the product much more transparent about what it is doing and thus it is always a good idea to design logging into your integrations and configure the logging framework to log exactly what you want.
Using tools like the ELK-stack that can aggregate your logs from multiple sources into a single log-search tool, is what we consider a very powerful combination. For instance, we can build a dashboard based on business logs about sales in Kibana. But that might be a topic for another blog.