Monday, July 29, 2013

Compute Java Object Memory Footprint at runtime with JAMM (Java Agent for Memory Measurements)

This short article shows how to measure the memory size of Java objects at runtime with JAMM, a Java agent dedicated to measuring actual object memory use, including JVM overhead.

JAMM uses the Instrumentation implementation provided by the JVM to compute memory usage of a given object by calling the getObjectSize(..) method.
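Under the hood, the mechanism looks like the following sketch (an illustration of the Instrumentation API, not JAMM's actual source; the SizeOfAgent class name is hypothetical):

import java.lang.instrument.Instrumentation;

public class SizeOfAgent {

    private static volatile Instrumentation instrumentation;

    // called by the JVM at startup because the jar is attached with -javaagent
    public static void premain(String agentArgs, Instrumentation inst) {
        instrumentation = inst;
    }

    // returns the shallow size, in bytes, of the given object
    public static long sizeOf(Object object) {
        if (instrumentation == null) {
            throw new IllegalStateException("agent not loaded, start the JVM with -javaagent");
        }
        return instrumentation.getObjectSize(object);
    }
}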

It is quite simple to use, as explained by the author:

MemoryMeter meter = new MemoryMeter();
meter.measure(object);
meter.measureDeep(object);
meter.countChildren(object);

The only constraint is to attach the JAMM Java agent to the JVM before using it to measure memory usage, by starting the JVM with the -javaagent option pointing to the JAMM jar.
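For instance (the paths and main class are illustrative):

java -javaagent:target/jamm.jar -cp myapp.jar com.example.Main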

So, we are going to write a JUnit test that shows how to use JAMM, but first we need to set up Maven:

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <!-- JAMM -->
        <dependency>
            <groupId>com.github.stephenc</groupId>
            <artifactId>jamm</artifactId>
            <version>0.2.5</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- copies the JAMM jar into the build directory before the tests run -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.8</version>
                <executions>
                    <execution>
                        <id>copy-dependencies</id>
                        <phase>generate-test-resources</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>com.github.stephenc</groupId>
                                    <artifactId>jamm</artifactId>
                                    <version>0.2.5</version>
                                    <type>jar</type>
                                    <outputDirectory>${project.build.directory}</outputDirectory>
                                    <destFileName>jamm.jar</destFileName>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- attaches the JAMM agent to the JVM that runs the tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.14</version>
                <configuration>
                    <argLine>-javaagent:${project.build.directory}/jamm.jar</argLine>
                </configuration>
            </plugin>
        </plugins>
    </build>

Next, write the following JUnit that explores the JAMM features:
package org.javabenchmark.memory;

import java.util.ArrayList;
import java.util.List;
import org.github.jamm.MemoryMeter;
import org.junit.Test;

public class MemoryMeterTest {

    private MemoryMeter meter = new MemoryMeter();
    
    @Test
    public void shouldMeasureMemoryUsage() {

        String st1 = "This is the string #1";
        measure(st1);
        
        String st2 = "This is the string #2 and it has more chars.";
        measure(st2);
        
        List<String> aList = new ArrayList<String>(0);
        measure(aList);
        
        aList.add(st1);
        measure(aList);
        
        aList.add(st2);
        measure(aList);
        
    }

    private void measure(Object anObject) {
        
        System.out.println("-----------------------------------");
        System.out.printf("size: %d bytes\n", meter.measure(anObject));
        System.out.printf("retained size: %d bytes\n", meter.measureDeep(anObject));
        System.out.printf("inner object count: %d\n", meter.countChildren(anObject));
    }
}

Running the test produces the following output on my computer:
-----------------------------------
size: 24 bytes
retained size: 88 bytes
inner object count: 2
-----------------------------------
size: 24 bytes
retained size: 128 bytes
inner object count: 2
-----------------------------------
size: 24 bytes
retained size: 40 bytes
inner object count: 2
-----------------------------------
size: 24 bytes
retained size: 136 bytes
inner object count: 4
-----------------------------------
size: 24 bytes
retained size: 264 bytes
inner object count: 6
To conclude, you can see how easy it is to monitor the memory usage of your objects. It is very handy when dealing with huge collections, or when using caches such as those provided by Guava or EHCache. That way you can set up triggers that alert you when memory consumption is excessive.
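For instance, a minimal sketch of such a trigger (the threshold, the watched object and the output are illustrative assumptions):

import org.github.jamm.MemoryMeter;

public class CacheMemoryWatchdog {

    // illustrative threshold: 64 MB
    private static final long THRESHOLD_BYTES = 64L * 1024 * 1024;
    private final MemoryMeter meter = new MemoryMeter();

    /**
     * warns when the retained size of the watched object exceeds the threshold.
     */
    public void check(Object watchedCache) {
        long retained = meter.measureDeep(watchedCache);
        if (retained > THRESHOLD_BYTES) {
            System.err.printf("WARNING: object retains %d bytes\n", retained);
        }
    }
}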


UPDATE: 2013-07-30

To answer rxin's comment, I did a quick (trivial) test with a list of 1,000,000 random strings, which represents 116 MB in memory.

For the test, I sized the heap with the following JVM options: -Xms256m -Xmx256m.
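A sketch of such a test (the way the random strings are generated is an illustrative assumption):

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import org.github.jamm.MemoryMeter;

public class BigListTest {

    public static void main(String[] args) {
        // builds a list of 1,000,000 random strings
        List<String> strings = new ArrayList<String>(1000000);
        for (int i = 0; i < 1000000; i++) {
            strings.add(UUID.randomUUID().toString());
        }
        // measures its retained size (requires -javaagent:jamm.jar)
        MemoryMeter meter = new MemoryMeter();
        System.out.printf("retained size: %d bytes\n", meter.measureDeep(strings));
    }
}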

Below is the memory usage during the test:


From my point of view, the memory overhead introduced by JAMM is negligible in this simple test case, but notice that the measureDeep() method takes time (reflection is slow).

Sunday, May 19, 2013

Monitoring Memory Usage inside a Java Web Application with JSF, PrimeFaces and MemoryMXBean

This article explains how to monitor memory usage in your web application by querying the MemoryMXBean and exposing the collected metrics in a PrimeFaces LineChart component on a JSF page.

Principle

The monitoring of the memory usage involves:

  • a PrimeFaces LineChart component that polls the server every minute,
  • a JSF managed bean (the MonitorController) that collects the metrics,
  • the MemoryMXBean provided by the JVM, which exposes the current memory usage.

The overall design is illustrated below:



  1. The LineChart component calls the MonitorController instance every minute.
  2. The MonitorController instance requests the MemoryMXBean instance to get the current memory usage.
  3. The MonitorController instance updates a CartesianChartModel instance by adding a memory usage snapshot.
  4. The CartesianChartModel instance is returned to the LineChart component that renders it inside a JSF page.

PrimeFaces LineChart

To enable PrimeFaces components in your web application, update your Maven pom.xml with the following repository and dependency:

    <repositories>
        <repository>
            <id>primefaces-maven-repo</id>
            <name>PrimeFaces Maven Repository</name>
            <url>http://repository.primefaces.org</url>
            <layout>default</layout>
        </repository>
    </repositories>

        <dependency>
            <groupId>org.primefaces</groupId>
            <artifactId>primefaces</artifactId>
            <version>3.4.2</version>
        </dependency>
Next, inside a JSF page, add the following code to display the LineChart and make it poll the MonitorController every minute:
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:p="http://primefaces.org/ui"
      xmlns:c="http://java.sun.com/jsp/jstl/core"
      xmlns:ui="http://java.sun.com/jsf/facelets">

    <h:head>
       ...
    </h:head>

    <h:body>
       ...
                <h:form id="form">  

                    <!-- polls every minute -->
                    <p:poll interval="60" update="memoryChart" />  

                    <!-- memory line chart -->
                    <p:lineChart id="memoryChart" value="#{monitorController.memoryModel}"
                                 legendPosition="ne" title="Memory Usage" style="height:300px;margin-top:20px"
                                 xaxisLabel="Minutes" yaxisLabel="Bytes" zoom="true"/>  

                </h:form>  
       ...

    </h:body>
</html>


MonitorController

The code corresponding to the monitor controller class is:
package org.javabenchmark;

import java.io.Serializable;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
import org.primefaces.model.chart.CartesianChartModel;
import org.primefaces.model.chart.ChartSeries;
import org.slf4j.LoggerFactory;

/**
 * JSF Managed Bean to get JVM memory usage.
 *
 */
@ManagedBean
@SessionScoped
public class MonitorController implements Serializable {

    /**
     * memory model.
     */
    private CartesianChartModel memoryModel;
    /**
     * memory usage series.
     */
    private ChartSeries memoryUsageSerie, maxMemorySerie;
    /**
     * elapsed time in minutes since the monitoring starts
     */
    private long elapsedTime = -1;

    /**
     * instantiates a new monitoring controller.
     */
    public MonitorController() {
        
        createMemoryModel();
    }
    
    /**
     * initializes memory usage model.
     */
    private void createMemoryModel() {  
        // model
        memoryModel = new CartesianChartModel(); 
        // heap serie
        memoryUsageSerie = new ChartSeries();  
        memoryUsageSerie.setLabel("Heap");
        memoryModel.addSeries(memoryUsageSerie);
        // max serie
        maxMemorySerie = new ChartSeries();  
        maxMemorySerie.setLabel("Max");
        memoryModel.addSeries(maxMemorySerie); 
    }
    
    /**
     * gets the memory model that will be rendered within a LineChart
     * component. The LineChart component should call this method every minute.
     * @return the updated memory model
     */
    public CartesianChartModel getMemoryModel() {  
        
        // gets data
        MemoryMXBean memoryMxBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage memoryUsage = memoryMxBean.getHeapMemoryUsage();
        
        // one more minute
        elapsedTime++;
        
        // updates series
        memoryUsageSerie.set(elapsedTime, memoryUsage.getUsed());
        maxMemorySerie.set(elapsedTime, memoryUsage.getMax()); 
        
        return memoryModel;  
    }
}


Screenshot

Finally, a screenshot of the PrimeFaces line chart representing the memory usage:


Notice that it is possible to zoom with drag and drop, if you have a lot of points displayed.

Conclusion

As you can see, it is quite easy to monitor the JVM and to display collected metrics within a JSF page. You can go further by monitoring CPU usage for instance, or by adding some features like starting/stopping/resetting the monitoring.

Sunday, May 5, 2013

Java instrumentation tutorial

This article explains how Java instrumentation works, and how you can write your own agent to do some basic profiling/tracing.

Overview

The JVM instrumentation feature was introduced in JDK 1.5 and is based on byte code instrumentation (BCI): when a class is loaded, you can alter the corresponding byte code to introduce features such as method execution profiling or event tracing. Most Java Application Performance Management (APM) solutions use this mechanism to monitor the JVM.

Instrumentation Agent

To enable JVM instrumentation, you have to provide one or more agents, each deployed as a JAR file. An attribute in the JAR file manifest specifies the agent class which will be loaded to start the agent.
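For instance, for an agent loaded at startup, the manifest contains a Premain-Class entry (the class name below is the one used in the example later in this article):

Premain-Class: org.javabenchmark.instrumentation.Agent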

There are two ways to load the agent:
  • with a command-line interface: by adding this option to the command line: -javaagent:jarpath[=options], where jarpath is the path to the agent JAR file and options is the agent options. This switch may be used multiple times on the same command line, thus creating multiple agents. More than one agent may use the same jarpath.
  • by dynamic loading: the JVM must implement a mechanism to start agents sometime after the VM has started. That way, a tool can "attach" an agent to a running JVM (for instance profilers, or ByteMan).


After the JVM has initialized, the agent class will be loaded by the system class loader. If the class loader fails to load the agent, the JVM will abort.



Next, the JVM instantiates an implementation of the Instrumentation interface and, depending on the context, tries to invoke one of the two methods that an agent must implement: premain or agentmain.

The premain method is invoked after JVM initialization, and the agentmain method is invoked sometime after the JVM has started (if the JVM provides such a mechanism). When the agent is started using a command-line option (with -javaagent), the agentmain method is not invoked. Conversely, the agent class may also have a premain method for use when it is started using a command-line option; when the agent is started after JVM startup, the premain method is not invoked.




The signatures of the premain method are:
public static void premain(String agentArgs, Instrumentation inst);
public static void premain(String agentArgs);

And the signatures of the agentmain method are:
public static void agentmain(String agentArgs, Instrumentation inst);
public static void agentmain(String agentArgs);
The agent needs to implement only one signature per method. The JVM first attempts to invoke the first signature; if the agent class does not implement it, the JVM will attempt to invoke the second one.

Byte Code Instrumentation

With the premain and agentmain methods, the agent can register a ClassFileTransformer instance by providing an implementation of this interface in order to transform class files. To register the transformer, the agent can use the addTransformer method of the given Instrumentation instance.


Now, all future class definitions will be seen by the transformer, except definitions of classes upon which any registered transformer is dependent. The transformer is called when classes are loaded, when they are redefined, and optionally, when they are retransformed (if the transformer was added to the Instrumentation instance with the boolean canRetransform set to true).

The following method of the ClassFileTransformer interface is responsible for any class file transformation:
byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
                 ProtectionDomain protectionDomain, byte[] classfileBuffer);

This is where things get complicated, because you need to manipulate raw byte code. To achieve this, I strongly suggest relying on dedicated tools such as ASM or Javassist.

Basic Profiling

To illustrate how the class file transformer can be used, we are going to set up some basic method profiling with Javassist:
  1. Write a trivial class that will be instrumented
  2. Write a ClassFileTransformer to inject some code to print method execution time
  3. Write an agent that registers the previous transformer
  4. Write the corresponding JUnit test
The class to be instrumented is:
package org.javabenchmark.instrumentation;

public class Sleeping {
    
    public void randomSleep() throws InterruptedException {
        
        // randomly sleeps between 500 ms and 1200 ms
        long randomSleepDuration = (long) (500 + Math.random() * 700);
        System.out.printf("Sleeping for %d ms ..\n", randomSleepDuration);
        Thread.sleep(randomSleepDuration);
    }
}

The transformer class is:
package org.javabenchmark.instrumentation;

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.IllegalClassFormatException;
import java.security.ProtectionDomain;
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class SleepingClassFileTransformer implements ClassFileTransformer {

    public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
        ProtectionDomain protectionDomain, byte[] classfileBuffer) throws IllegalClassFormatException {

        byte[] byteCode = classfileBuffer;

        if (className.equals("org/javabenchmark/instrumentation/Sleeping")) {

            try {
                ClassPool cp = ClassPool.getDefault();
                CtClass cc = cp.get("org.javabenchmark.instrumentation.Sleeping");
                CtMethod m = cc.getDeclaredMethod("randomSleep");
                m.addLocalVariable("elapsedTime", CtClass.longType);
                m.insertBefore("elapsedTime = System.currentTimeMillis();");
                m.insertAfter("{elapsedTime = System.currentTimeMillis() - elapsedTime;"
                        + "System.out.println(\"Method Executed in ms: \" + elapsedTime);}");
                byteCode = cc.toBytecode();
                cc.detach();
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        return byteCode;
    }
}

The transformer checks whether the class to transform is the Sleeping one; if so, it injects the code that prints the time taken by the execution of the randomSleep() method.

The agent class is:
package org.javabenchmark.instrumentation;

import java.lang.instrument.Instrumentation;

public class Agent {

    public static void premain(String agentArgs, Instrumentation inst) {
        
        // registers the transformer
        inst.addTransformer(new SleepingClassFileTransformer());
    }
}
And finally, the corresponding JUnit code:
package org.javabenchmark.instrumentation;

import org.junit.Test;

public class AgentTest {

    @Test
    public void shouldInstantiateSleepingInstance() throws InterruptedException {

        Sleeping sleeping = new Sleeping();
        sleeping.randomSleep();
    }
}
But, before executing the test, you need to set up Maven to enable the JVM agent:
  • build the JAR that contains the agent's code before running tests
  • add the -javaagent JVM option to the JVM that runs the tests
  • add the Javassist dependency
To achieve this, add the following XML code to your pom.xml file, inside the <build><plugins> ... </plugins></build> section:

            <!-- builds the JAR containing the agent's code, with the Premain-Class manifest entry -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.4</version>
                <executions>
                    <execution>
                        <phase>process-classes</phase>
                        <goals>
                            <goal>jar</goal>
                        </goals>
                        <configuration>
                            <archive>
                                <manifestEntries>
                                    <Premain-Class>org.javabenchmark.instrumentation.Agent</Premain-Class>
                                </manifestEntries>
                            </archive>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <!-- attaches the agent JAR to the JVM that runs the tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.14</version>
                <configuration>
                    <argLine>-javaagent:target/${project.build.finalName}.jar</argLine>
                </configuration>
            </plugin>
Next, add the Javassist dependency:

        <dependency>
            <groupId>org.javassist</groupId>
            <artifactId>javassist</artifactId>
            <version>3.14.0-GA</version>
            <type>jar</type>
        </dependency>

Then, running the test should produce something like this:
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.javabenchmark.instrumentation.AgentTest
Sleeping for 818 ms ..
Method Executed in ms: 820
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.001 sec
You can see that there is a new trace, Method Executed in ms: 820, proving that instrumentation works. Without instrumentation, you would only get the Sleeping for 818 ms .. trace.

Conclusion

It is easy to profile your code with the instrumentation API and the Javassist API: write transformers with Javassist, write an agent to register them, then use the -javaagent option and you're done!

Sunday, April 21, 2013

Fault injection in your JUnit with ByteMan

This article shows how to use ByteMan from JBoss to inject faults when testing Java code.

As usual, let us start with some trivial code to test: a simple class that appends strings to a given file.
package org.javabenchmark.byteman;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Logging helper class.
 *
 */
public class LogHelper {

    public static final String LOG_FILE_PROPERTY = "logHelperFile";
    public static final String DEFAULT_LOG_FILE = "sample.log";
    private static final File LOG_FILE = new File(System.getProperty(LOG_FILE_PROPERTY, DEFAULT_LOG_FILE));

    /**
     * logs the given trace to a dedicated file.
     *
     * @param trace a string to be written into the log file.
     */
    public static void log(String trace) {

        try {
            FileOutputStream fos = new FileOutputStream(LOG_FILE, true);
            fos.write(trace.getBytes());
            fos.write('\n');
            fos.flush();
            fos.close();
        } catch (IOException ex) {
            final String msg = "Log Helper can't write trace to the log file: " + LOG_FILE.getAbsolutePath();
            System.err.println(msg);
        }
    }
}


The next step is to write the corresponding test:
package org.javabenchmark.byteman;

import java.io.File;
import static org.fest.assertions.api.Assertions.*;
import org.jboss.byteman.contrib.bmunit.BMRule;
import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

/**
 * Unit Test dedicated to LogHelper.
 */
public class LogHelperTest {

    private static final String SUCCESSFULL_TEST_LABEL = "Successfull Test";

    private void deleteFile(File file) {
        // clean
        file.delete();
        assertThat(file).doesNotExist();
    }
    
    @Test
    public void shouldLog() {

        File logFile = new File(LogHelper.DEFAULT_LOG_FILE);

        // clean
        deleteFile(logFile);

        // one line test
        LogHelper.log(SUCCESSFULL_TEST_LABEL);

        // control
        assertThat(logFile).exists();
        assertThat(logFile).hasContent(SUCCESSFULL_TEST_LABEL);

        // several lines test
        LogHelper.log(SUCCESSFULL_TEST_LABEL);
        LogHelper.log(SUCCESSFULL_TEST_LABEL);

        // control
        assertThat(logFile).hasContent(SUCCESSFULL_TEST_LABEL + '\n' + SUCCESSFULL_TEST_LABEL + '\n' + SUCCESSFULL_TEST_LABEL);
        deleteFile(logFile);
    }
}

Running the test produces the following output:
Testsuite: org.javabenchmark.byteman.LogHelperTest
Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.136 sec

And if you enable code coverage in your favorite IDE, you should have something like this:

Red regions show us the parts of the code that are not covered by the test. In our case, we can see that the code that handles the IO exception is never invoked, and it is not possible to cover all the code of the method without raising an IOException. Fortunately, there is a powerful tool for this: ByteMan.

As explained by the authors:
Byteman is a tool which simplifies tracing and testing of Java programs. Byteman allows you to insert extra Java code into your application, either as it is loaded during JVM startup or even after it has already started running.

Actually, ByteMan is a Java agent that does instrumentation, and we are going to use it to inject an IOException during the execution of our test, to exercise the code that deals with it. The idea is to throw an IOException when the method FileOutputStream.write(byte[]) is called.

Firstly, the dependency for ByteMan:

<dependency>
  <groupId>org.jboss.byteman</groupId>
  <artifactId>byteman-bmunit</artifactId>
  <version>2.1.2</version>
</dependency>

Secondly, modify the previous test to add ByteMan to our JUnit and write a new method to test the code that handles IOException:
package org.javabenchmark.byteman;

import java.io.File;
import static org.fest.assertions.api.Assertions.*;
import org.jboss.byteman.contrib.bmunit.BMRule;
import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

/**
 * Unit Test dedicated to LogHelper.
 */
@RunWith(BMUnitRunner.class)
public class LogHelperTest {

    private static final String SUCCESSFULL_TEST_LABEL = "Successfull Test";

    private void deleteFile(File file) {
        // ...
    }
    
    @Test
    public void shouldLog() {
        // ...
    }

    @Test
    @BMRule(name = "IOException Rule",
            targetClass = "FileOutputStream",
            targetMethod = "write(byte[])",
            targetLocation = "AT ENTRY",
            condition = "TRUE",
            action = "throw new java.io.IOException()")
    public void shouldHandleIOException() {
        
        File logFile = new File(LogHelper.DEFAULT_LOG_FILE);

        // clean
        deleteFile(logFile);

        // one line test that should raise an io exception
        LogHelper.log(SUCCESSFULL_TEST_LABEL);
        
        // control
        assertThat(logFile).exists();
        assertThat(logFile).hasContent("");
        
        // clean
        deleteFile(logFile);
    }
}


The BMRule annotation defines the execution context for ByteMan:
  • name: to set the name of the rule
  • targetClass and targetMethod: to set the method that must be instrumented
  • targetLocation: to set where to inject the code (at the beginning, at the end or somewhere in the body of the method)
  • condition: to set some condition to be met
  • action: the code to be injected
So, we have a rule named "IOException Rule" that is activated when the method write(byte[]) of the FileOutputStream class begins executing. The rule then injects code that throws an IOException. When the shouldHandleIOException test runs, every call to write(byte[]) raises a java.io.IOException.

Now, running all tests produces:
Testsuite: org.javabenchmark.byteman.LogHelperTest
byteman jar is /home/javabenchmark/.m2/repository/org/jboss/byteman/byteman/2.1.2/byteman-2.1.2.jar
Log Helper can't write trace to the log file: /home/javabenchmark/Documents/projects/javabenchmark/sample.log
Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.534 sec

And the code coverage looks like this:

The log(String) method is now fully covered (the implicit constructor of the LogHelper class is not tested, which explains the missing percentage points to reach 100%).

Conclusion

ByteMan is very useful to test code that deals with exceptions, especially fault-tolerant code. That way, you can check that your fault-tolerance strategy is really resilient, or that your code gives you relevant traces when errors occur.

Thursday, April 11, 2013

Check that your code is thread-safe with JUnit and ContiPerf

This short article shows how to check that you write thread-safe code by transforming a JUnit test into a concurrent one with ContiPerf.

Suppose that you have to test a date formatter component that uses the famous SimpleDateFormat class, as illustrated below:
package org.javabenchmark;

import java.text.SimpleDateFormat;
import java.util.Date;

/**
 * Helper dedicated to format date in a standard way.
 */
public class NonThreadSafeDateFormatHelper {

    /**
     * the date format for standard representation.
     */
    private SimpleDateFormat standardDateFormat = new SimpleDateFormat("yyyy-MM-dd");

    /**
     * formats the given date using the standard date format: yyyy-MM-dd.
     *
     * @param date the date to format
     * @return a literal representation of the given date.
     */
    public String toStandardString(Date date) {
        return standardDateFormat.format(date);
    }
}

Then, you could write a junit test like this:
package org.javabenchmark;

import org.junit.Test;

import java.util.Calendar;

import static org.fest.assertions.api.Assertions.*;

public class NonThreadSafeDateFormatHelperTest {

    @Test
    public void shouldFormatRandomDate() {

        // random date
        Calendar calendar = Calendar.getInstance();
        calendar.set(Calendar.YEAR, (int) (1000 + Math.random() * 1000));
        calendar.set(Calendar.DAY_OF_YEAR, (int) (Math.random() * 365));

        // test
        NonThreadSafeDateFormatHelper dateFormatHelperToTest = new NonThreadSafeDateFormatHelper();
        String randomDateString = dateFormatHelperToTest.toStandardString(calendar.getTime());

        // general controls
        assertThat(randomDateString).isNotNull();
        assertThat(randomDateString).hasSize(10);
        // year control
        String literalYear = String.valueOf(calendar.get(Calendar.YEAR));
        assertThat(literalYear).isEqualTo(randomDateString.substring(0, 4));
        // month control
        String literalMonth = String.valueOf(calendar.get(Calendar.MONTH) + 1);
        if (literalMonth.length() == 1) {
            literalMonth = "0" + literalMonth;
        }
        assertThat(literalMonth).isEqualTo(randomDateString.substring(5, 7));
        // day control
        String literalDayh = String.valueOf(calendar.get(Calendar.DAY_OF_MONTH));
        if (literalDayh.length() == 1) {
            literalDayh = "0" + literalDayh;
        }
        assertThat(literalDayh).isEqualTo(randomDateString.substring(8));
    }
}

Next, running the test will produce the following output:
Testsuite: org.javabenchmark.NonThreadSafeDateFormatHelperTest
Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.173 sec

The test passes, and you could think that everything is fine, but what will happen if the DateFormatHelper class is used concurrently, for instance from a JSF page that displays the current date?

To check that your code can handle concurrency, you can modify the previous JUnit test like this:
package org.javabenchmark;

import org.junit.Rule;
import org.junit.Test;

import java.text.ParseException;
import java.util.Calendar;
import org.databene.contiperf.PerfTest;
import org.databene.contiperf.junit.ContiPerfRule;

import static org.fest.assertions.api.Assertions.assertThat;

public class NonThreadSafeDateFormatHelperPerfTest {

    @Rule
    public ContiPerfRule i = new ContiPerfRule();
    /**
     * the date format helper to test.
     */
    private NonThreadSafeDateFormatHelper dateFormatHelperToTest = new NonThreadSafeDateFormatHelper();

    @Test
    @PerfTest(invocations = 1000, threads = 2)
    public void shouldFormatRandomDatesConcurrently() throws ParseException {

        // random date
        Calendar calendar = Calendar.getInstance();
        calendar.set(Calendar.YEAR, (int) (1000 + Math.random() * 1013));
        calendar.set(Calendar.DAY_OF_YEAR, (int) (Math.random() * 365));
        
        // test
        String randomDateString = dateFormatHelperToTest.toStandardString(calendar.getTime());

        // general controls
        assertThat(randomDateString).isNotNull();
        assertThat(randomDateString).hasSize(10);
        // year control
        String literalYear = String.valueOf(calendar.get(Calendar.YEAR));
        assertThat(literalYear).isEqualTo(randomDateString.substring(0, 4));
        // month control
        String literalMonth = String.valueOf(calendar.get(Calendar.MONTH) + 1);
        if (literalMonth.length() == 1) {
            literalMonth = "0" + literalMonth;
        }
        assertThat(literalMonth).isEqualTo(randomDateString.substring(5, 7));
        // day control
        String literalDayh = String.valueOf(calendar.get(Calendar.DAY_OF_MONTH));
        if (literalDayh.length() == 1) {
            literalDayh = "0" + literalDayh;
        }
        assertThat(literalDayh).isEqualTo(randomDateString.substring(8));
    }
}

This is the same test, except that the test method is invoked 1000 times by 2 threads, producing this output:
Testsuite: org.javabenchmark.NonThreadSafeDateFormatHelperPerfTest
org.javabenchmark.NonThreadSafeDateFormatHelperPerfTest.shouldFormatRandomDatesConcurrently
samples: 999
max:     19
average: 0.04804804804804805
median:  0
Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0.377 sec
So, there is now an error, indicating that the DateFormatHelper component is not thread-safe: do not let it go into production :)
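A possible fix, as a sketch (one SimpleDateFormat instance per thread via ThreadLocal; the class name is illustrative):

package org.javabenchmark;

import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadSafeDateFormatHelper {

    /**
     * one SimpleDateFormat per thread, since SimpleDateFormat is not thread-safe.
     */
    private static final ThreadLocal<SimpleDateFormat> FORMAT = new ThreadLocal<SimpleDateFormat>() {
        @Override
        protected SimpleDateFormat initialValue() {
            return new SimpleDateFormat("yyyy-MM-dd");
        }
    };

    public String toStandardString(Date date) {
        return FORMAT.get().format(date);
    }
}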

Summary

Good unit tests are not sufficient when you are writing components that will run in a multi-threaded environment, like web applications. You can easily check whether your code is vulnerable to race conditions with ContiPerf.

Sunday, April 7, 2013

JVM Biased Locking and micro benchmark

This is a short article that focuses on the HotSpot JVM Biased Locking feature. Before explaining what it is, let's take a simple example:
package org.javabenchmark;

public class SynchronizeCounter {
    
    private static long count = 0L;
    
    public synchronized void increment() {
        count = count + 1;
    }
    
    public long getCount() {
        return count;
    }
}
This class manages a thread-safe counter. The code uses the synchronized keyword to ensure that only one thread at a time can invoke the increment() method.

Now, a trivial micro benchmark with only one thread incrementing the counter:
package org.javabenchmark;

import org.junit.Test;

public class OneThreadCounterTest {

    @Test
    public void incrementCounterLoop() {
        
        SynchronizeCounter counter = new SynchronizeCounter();
        long time = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            counter.increment();
        }
        time = System.currentTimeMillis() - time;
        System.out.printf("Counter: %d - Elapsed time: %d ms\n", counter.getCount(), time);
    }
}

The output is: Counter: 10000000 - Elapsed time: 67 ms

Next, the same test, but with a 4.5-second wait beforehand:
package org.javabenchmark;

import org.junit.Test;

public class OneThreadCounterTest {

    @Test
    public void incrementCounterLoop() throws InterruptedException {

        Thread.sleep(4500);
        
        SynchronizeCounter counter = new SynchronizeCounter();
        long time = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            counter.increment();
        }
        time = System.currentTimeMillis() - time;
        System.out.printf("Counter: %d - Elapsed time: %d ms\n", counter.getCount(), time);
    }
}

The output is: Counter: 10000000 - Elapsed time: 12 ms
Wow, that is a big improvement, considering that the only difference between the two tests is the line with Thread.sleep(..).
So, what happened?

In the second case, the code benefits from biased locking: we are in the context of an un-contended lock, meaning that a lock mechanism is used but without any concurrency (a single thread). In that case, the JVM uses biased locking and the running thread does not have to rely on expensive atomic instructions to acquire the lock.

Biased locking is enabled by default, but only after 4 seconds; that is why the second test benefits from it. You can suppress this 4-second delay with the following JVM option:
-XX:BiasedLockingStartupDelay=0
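For instance, to run the first benchmark without the startup delay (the class path is illustrative):

java -XX:BiasedLockingStartupDelay=0 -cp junit.jar:target/test-classes org.junit.runner.JUnitCore org.javabenchmark.OneThreadCounterTest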

Conclusion

When dealing with un-contended micro-benchmarks, be aware of the biased locking startup delay and use the JVM option above to remove it.

Saturday, April 6, 2013

Web Services Performance Testing with JMeter

This article explains how to measure the performance of your JEE6 web services with JMeter.

Web Service

The web service is a trivial one, that echoes its text input as illustrated by the following code:
package org.javabenchmark;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService(serviceName = "EchoWebService")
public class EchoWebService {

    @WebMethod
    public String echo(@WebParam(name="text") String text) {
        return "Echoing: " + text;
    }
}

Use your favorite JEE6 application server (Glassfish, JBoss, TomEE, Weblogic, WebSphere, ..) to deploy a simple application that exposes the previous web service (this step is beyond the scope of this article).

The endpoint of the service can be constructed by concatenating the application's context and the name of the service. For instance, if the application is deployed to http://localhost:8080/jee6, then the web service endpoint will be: http://localhost:8080/jee6/EchoWebService

JMeter Installation

First, you need to download JMeter from the Apache JMeter site. Next, you need a mandatory JMeter plugin: jmeter-plugins.

Unzip the JMeter archive, then copy the JMeterPlugins.jar file from the plugin archive into the lib/ext directory of your JMeter installation.

Test Plan

Now, you are ready to generate some SOAP requests. To achieve this, launch JMeter, right-click on the Test Plan icon and choose Add > Threads (Users) > jp@gc - Stepping Thread Group:




Then, set the number of threads to 30, to enable 30 concurrent threads as illustrated below:


So, the performance test will start 10 threads, wait 30 seconds, then start 10 more threads, repeating until 30 threads are reached. Then it will hold the load for 1 minute.

To invoke the web service, we need to set the content type and provide a SOAP envelope for the call.

First, add an HTTP Header Manager to set the content type: right-click on the Stepping thread group and choose Add > Config Element > HTTP Header Manager.



Then, click the Add button at the bottom and fill the first row of the HTTP Header table with the following values:
  • Name column: content-type
  • Value column: text/xml





To execute the call to the web service, an HTTP Request sampler is needed: right-click on the Stepping thread group and choose Add > Sampler > HTTP Request.




Next, configure the HTTP Request sampler like this:

  • Implementation: HttpClient4
  • Method: POST
  • Path: http://localhost:8080/jee6/EchoWebService (replace with your endpoint address)
  • Post Body: a SOAP envelope calling the echo operation, like the following (the jav namespace below follows the default JAX-WS convention for the org.javabenchmark package; adapt it to your service's WSDL):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:jav="http://javabenchmark.org/">
   <soapenv:Header/>
   <soapenv:Body>
      <jav:echo>
         <text>TEXT ${__counter(FALSE)}</text>
      </jav:echo>
   </soapenv:Body>
</soapenv:Envelope>
Replace the previous XML with the one corresponding to the web service you are testing. You can use the WSDL and the XSD provided by the service; with a JAX-WS stack they are typically published at http://localhost:8080/jee6/EchoWebService?wsdl and http://localhost:8080/jee6/EchoWebService?xsd=1 (adapt those URLs to the ones provided by your application server).
To check that everything goes right, add a View Results Tree listener: right-click on the Stepping thread group and choose Add > Listener > View Results Tree.



So, the next step is to verify that the test plan runs correctly and that the web service is responding (your web service must be up and running):

  • Click the start button:

  • And check in the view results tree that every request succeeds:



EDIT: Do not use the View Results Tree during the load test. Just click the Error check-box or disable the Results Tree.

Everything is fine, so it is time to collect some metrics about the performance of the targeted web service. To achieve this, we need to add some listeners to our test plan:

  • Response time: right-click on the Stepping thread group and choose Add > Listener > jp@gc - Response Times Over Time.
  • Request per second: right-click on the Stepping thread group and choose Add > Listener > jp@gc - Transactions per Second.
  • Response time's Percentiles: right-click on the Stepping thread group and choose Add > Listener > jp@gc - Response Times Percentiles.

It is up to you to pick from the numerous listeners available!

Monitoring

The last step is to start the test plan again and to monitor the graphs provided by the 3 listeners previously added:

Response times
Requests per second

Response times percentiles

Saturday, March 9, 2013

Monitor OS pauses with jHiccup when benchmarking Java App

Benchmarking Java code implies being sure that the underlying platform does not impact the performance of the JVM in any way. Fortunately, there is a tool that monitors pauses and stalls that may occur at the platform level: jHiccup from Azul.



As explained on Azul's site, jHiccup captures the aggregate effects of the Java Virtual Machine (JVM), operating system, hypervisor (if used) and hardware on application stalls and response time.

This tool records platform pauses/stalls into log files that you can import into an MS Excel file:


Next you can plot the platform "hiccups" and percentiles as illustrated below:


chart from MS Excel

This chart shows that during a 25-second benchmark, 99% of the latencies collected are less than 300 milliseconds, with a peak of half a second! Such values indicate that we are not in the case of a dedicated/quiet system, and that there must be some background activity (actually, I ran this benchmark during a Windows 7 user session with mail, browser and other usual software).

As you can guess, such a tool is really useful to check:
  • that your JVM is running on a dedicated or quiet system
  • that application response time behavior are not influenced by the platform
Running jHiccup is quite simple: use the provided script to launch your JVM in a Unix environment (e.g. jHiccup /usr/bin/java <Java args> -jar UsefulProgram.jar -a -b -c), or use it as an instrumentation agent in other cases (e.g. -javaagent:D:\apps\jHiccup.1.2.2\bin\jHiccup.jar="-d 0 -i 1000 -l hiccuplog -c c.hiccuplog"), and you are done!


Thursday, March 7, 2013

Continuous Performance and JUnit with ContiPerf

If you want to introduce some continuous performance testing into your Java project, you should definitely check the ContiPerf library: that tool transforms your unit tests into performance tests simply by adding a few annotations!

For instance, suppose you need to do some XML serialization with JAXB, as illustrated below, and therefore you have to test the UserSerializer class.


The corresponding code for the UserSerializer class is:
package org.javabenchmark;

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;

public class UserSerializer {

    public void serializeUserToXmlFile(User user, File file) throws JAXBException {

        JAXBContext jaxbContext = JAXBContext.newInstance(User.class);
        Marshaller jaxbMarshaller = jaxbContext.createMarshaller();
        jaxbMarshaller.marshal(user, file);
    }
}
and the corresponding code for the UserSerializer test would be:
package org.javabenchmark;

import java.io.File;
import java.util.UUID;
import javax.xml.bind.JAXBException;
import org.junit.Test;
import static org.fest.assertions.api.Assertions.*;

public class UserSerializerTest {
    
    @Test
    public void shouldSerializeRandomUser() throws JAXBException {
        
        // given a random user
        String id = UUID.randomUUID().toString();
        String firstName = "Julien";
        String lastName = "Paoletti";
        
        User randomUser = new User();
        randomUser.setId(id);
        randomUser.setFirstName(firstName);
        randomUser.setLastName(lastName);
        
        // when serializing it to a random XML file
        File xmlFile = new File("target/" + id + ".xml");
        xmlFile.deleteOnExit();
        
        UserSerializer serializer = new UserSerializer();
        serializer.serializeUserToXmlFile(randomUser, xmlFile);
        
        // then the file contains the corresponding XML
        // the expected XML content (reconstructed; the exact layout depends on
        // the JAXB mapping of the User class)
        String validXmlContent = "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>"
                + "<user id=\"" + id + "\">"
                + "<firstName>" + firstName + "</firstName>"
                + "<lastName>" + lastName + "</lastName>"
                + "</user>";
        assertThat(xmlFile).hasContent(validXmlContent);
    }
}

The test passes; it is now time to transform this unit test into a performance test, in 2 steps:
  1. add a JUnit rule from ContiPerf
  2. add a ContiPerf annotation to configure the execution of the performance test

public class UserSerializerTest {

    @Rule
    public ContiPerfRule i = new ContiPerfRule();
    
    @Test
    @PerfTest(invocations = 200, threads = 2)
    public void shouldSerializeRandomUser() throws JAXBException {

        // ...
    }
}
As you can see, the test will be executed 200 times by 2 threads. On my computer (Ubuntu VM, 5 GB, 4 CPUs), running the test produces the following output:

org.javabenchmark.UserSerializerTest.shouldSerializeRandomUser
samples: 200
max:     140
average: 6.79
median:  4

You would think that a simple XML serialization could be faster (almost 7 ms on average). To get some clues, let's profile the performance test with VisualVM:


Ouch, 95% of the execution time is taken by the JAXBContext method newInstance(). How can we improve this? A quick search on the internet shows that JAXBContext is actually thread-safe, so it is not necessary to instantiate it each time.

The previous code can be improved:
public class UserSerializer {
    
    private static JAXBContext jaxbContext;
    
    static {
        try {
            jaxbContext = JAXBContext.newInstance(User.class);
        } catch (JAXBException ex) {
            // handles exception
            // ...
        }
    }

    public void serializeUserToXmlFile(User user, File file) throws JAXBException {
        Marshaller jaxbMarshaller = jaxbContext.createMarshaller();
        jaxbMarshaller.marshal(user, file);
    }
}

Now, running the test again produces the output:

org.javabenchmark.UserSerializerTest.shouldSerializeRandomUser
samples: 200
max:     133
average: 1.4
median:  0


It is much better, and we can now add performance checks to our test. To achieve this, ContiPerf provides the @Required annotation:
  • @Required(throughput = 20) : Requires to have at least 20 test executions per second 
  • @Required(average = 50): Requires an average execution time of not more than 50 milliseconds
  • @Required(median = 45): Requires that 50% of all executions do not take longer than 45 milliseconds
  • @Required(max = 2000): Requires that no invocation takes more than 2000 milliseconds (2 seconds)
  • @Required(totalTime = 5000): Requires that the sum of all execution times is not more than 5000 milliseconds (5 seconds)
  • @Required(percentile90 = 3000): Requires that 90% of all executions do not take longer than 3000 milliseconds
  • @Required(percentile95 = 5000): Requires that 95% of all executions do not take longer than 5000 milliseconds
  • @Required(percentile99 = 10000): Requires that 99% of all executions do not take longer than 10000 milliseconds
  • @Required(percentiles = "66:200,96:500"): Requires that 66% of all executions do not take longer than 200 milliseconds and 96% of all executions do not take longer than 500 milliseconds

In our case, we could check that the average does not exceed 2 ms and that 95% of all executions do not take longer than 1 ms:
public class UserSerializerTest {

    @Rule
    public ContiPerfRule i = new ContiPerfRule();

    @Test
    @PerfTest(invocations = 200, threads = 2)
    @Required(average = 2, percentile95 = 1)
    public void shouldSerializeRandomUser() throws JAXBException {
        // ...
    }
}

Conclusion
To enable continuous performance in your test:
  1. write a JUnit test that passes
  2. transform it into a performance one
  3. ensure that performance is fine
  4. add your performance requirements

Saturday, March 2, 2013

Write your own profiler with JEE6 interceptor

JEE6 interceptors are powerful components that interpose on method invocations, providing an easy way to execute some given code just before and after a method's execution.

It is then trivial to build a basic profiler: start a stopwatch just before a method executes, and stop it just after it ends. This can easily be done with a custom annotation to identify the methods that are eligible for profiling, and an interceptor to handle the stopwatch.

The first step is to create the @Profiled annotation:
/**
 * The profiled annotation. Use it to mark methods that must be profiled.
 */
@Inherited
@InterceptorBinding
@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface Profiled {
}

The next step is to write the interceptor that deals with methods that have the @Profiled annotation:
/**
 * An interceptor for profiling methods that have the
 * @Profiled annotation.
 */
@Profiled
@Interceptor
public class BasicProfiledMethodInterceptor {

    private static final Logger LOGGER = LoggerFactory.getLogger(BasicProfiledMethodInterceptor.class);

    /**
     * starts a stopwatch before a given method execution and stops it after the
     * method is proceeded. Then logs the elapsed time.
     *
     * @param context the invocation context.
     * @return the object produced by the method execution.
     * @throws Exception if an error occurs.
     */
    @AroundInvoke
    public Object profile(InvocationContext context) throws Exception {

        // starts the stopwatch
        final long startTime = System.currentTimeMillis();

        // executes the method that is profiled
        Object o = context.proceed();

        // stops the stopwatch and computes elapsed time
        final long elapsedTime = System.currentTimeMillis() - startTime;

        // logs method and time
        final String methodSignature = context.getMethod().toString();
        LOGGER.info("Profiled Method: {} in {} ms", methodSignature, elapsedTime);

        return o;
    }
}

Notice that logging is done with SLF4J and Logback in this example, but it is up to you to adapt the previous code to use your favorite logging library.

Now, to enable the interceptor, you must put a beans.xml file into the WEB-INF directory of your enterprise application (or the META-INF directory for EJB modules or JARs), with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <interceptors>
        <class>org.javabenchmark.BasicProfiledMethodInterceptor</class>
    </interceptors>
</beans>

Let's now go with a more sophisticated interceptor that relies on Metrics (for monitoring) and Guava (for caching monitors): the idea is to manage method-dedicated Timers that are stored inside a cache. The key for retrieving a timer from the cache is the method signature (which also includes the package and the name of the class).

At the beginning, the cache is empty and when a method is profiled for the first time, a timer is instantiated and put in the cache by a cache loader.

The code of the new interceptor:
/**
 * An interceptor for profiling method executions. Starts a timer before method
 * execution and stops it after. Uses a cache for storing the different timers.
 */
@Profiled
@Interceptor
public class ProfiledMethodInterceptor {

    private static final Logger LOGGER = LoggerFactory.getLogger(ProfiledMethodInterceptor.class);
    /**
     * the cache for all timers. The key for an entry in the cache is the
     * signature of the method that is profiled. Max is 1000 timers.
     */
    private final LoadingCache<String, Timer> timersCache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .build(
            new CacheLoader<String, Timer>() {
                @Override
                public Timer load(String methodSignature) {
                    return Metrics.newTimer(ProfiledMethodInterceptor.class, methodSignature, TimeUnit.MILLISECONDS, TimeUnit.SECONDS);
                }
            });

    /**
     * gets the corresponding timer for the method being executed, starts it
     * before the execution and stops it after the method is proceeded.
     *
     * @param context the invocation context.
     * @return the object produced by the method execution.
     * @throws Exception if an error occurs.
     */
    @AroundInvoke
    public Object profile(InvocationContext context) throws Exception {

        // gets the corresponding timer
        String methodSignature = context.getMethod().toString();
        final Timer timer = timersCache.get(methodSignature);

        // starts it
        final TimerContext timerContext = timer.time();

        // executes the method that is profiled
        Object o = context.proceed();

        // stops the timer
        timerContext.stop();
        return o;
    }
}

With this interceptor you can now monitor several metrics, like invocations per second, duration and percentiles, directly via JMX, or by enabling reporting to logs, CSV files or HTTP, as explained in the Metrics documentation.
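For instance, a minimal sketch to enable console reporting with Metrics 2.x (assuming the ConsoleReporter API of that version; check yours):

import java.util.concurrent.TimeUnit;
import com.yammer.metrics.reporting.ConsoleReporter;

// at application startup: prints all registered metrics to the console every minute
ConsoleReporter.enable(1, TimeUnit.MINUTES);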

You need to update the beans.xml file to enable the new interceptor:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <interceptors>
        <class>org.javabenchmark.ProfiledMethodInterceptor</class>
    </interceptors>
</beans>

Finally, the dependencies for Metrics and Guava:

        <!-- Metrics -->
        <dependency>
            <groupId>com.yammer.metrics</groupId>
            <artifactId>metrics-core</artifactId>
            <version>2.2.0</version>
        </dependency>
        <!-- Guava -->
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>13.0.1</version>
        </dependency>

Monday, February 25, 2013

Glassfish 3.1.2.2 Web Service Memory Leak

I recently installed NetBeans 7.3 with the Glassfish 3.1.2.2 combo and decided to play with it. So I deployed a simple application that exposes an "echo" web service.
@WebService(serviceName = "EchoWebService")
public class EchoWebService {

    @WebMethod
    public String echo(@WebParam(name="text") String text) {
        return text;
    }
}
Then, using JMeter, I stressed the service to see how the server performs: a simple test plan consisting of a group of 5 threads, each thread continuously sending an HTTP request to the web service. The request contains a SOAP envelope holding the parameter that must be echoed by the web service.

JMeter Sampler - HTTP Request with SOAP Envelope

This chart illustrates the performance monitored within JMeter (in request per second):
Echo Web Service - Requests per second
As you can see, after 2 min 24 s the server starts performing very badly .. and after almost 5 minutes I stopped the load test.

During the test, I monitored the server's memory with JConsole:

JConsole - Heap Usage
The graph shows that objects created during the test are not freed by the garbage collector, and that at the end the JVM is spending most of its time trying to free some memory.

At this point, I ran the test another time and forced a heap dump to get some clues about the memory leak.

The analysis reveals that the leak comes from the com.sun.enterprise.container.common.impl.managedbean.ManagedBeanManagerImpl component that holds an instance of a java.util.HashMap$Entry[] where 91% of the memory is accumulated:
Memory Analyzer Tool - Report
This array is full of org.glassfish.weld.services.JCDIServiceImpl$JCDIInjectionContextImpl instances (524,488 of them).

I do not know if it is a known issue, or if there is something wrong with my modus operandi.


Sunday, February 24, 2013

Benchmarking With JUnitBenchmark

If you are interested in doing some benchmarking with JUnit, there is a very useful library for that: JUnitBenchmark. It has really nice features like:
  • Recording of execution time (average and standard deviation)
  • Monitoring garbage collector activity
  • Benchmark warm-up

To demonstrate what can be done with it, let's go with a trivial example comparing 2 components that handle String concatenation:
  1. StringBuilder from the JDK
  2. TextBuilder from Javolution

The first thing is to add dependencies for JUnitBenchmark and Javolution into the pom.xml:

        <!-- JUnit -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <!-- JUnitBenchmark -->
        <dependency>
            <groupId>com.carrotsearch</groupId>
            <artifactId>junit-benchmarks</artifactId>
            <version>0.5.0</version>
            <scope>test</scope>
        </dependency>
        <!-- Javolution -->
        <dependency>
            <groupId>org.javolution</groupId>
            <artifactId>javolution</artifactId>
            <version>5.3.1</version>
            <scope>test</scope>
        </dependency>

Then here is the code for the JUnit Benchmark:
package org.javabenchmark;

import com.carrotsearch.junitbenchmarks.AbstractBenchmark;
import com.carrotsearch.junitbenchmarks.BenchmarkOptions;
import javolution.text.TextBuilder;
import org.junit.Test;

/**
 * Benchmark for String concatenation. Compares StringBuilder (JDK) and
 * TextBuilder (Javolution).
 */
public class StringConcatenationBenchmark extends AbstractBenchmark {

    public static final long LOOPS_COUNT = 10000000;

    @Test
    @BenchmarkOptions(benchmarkRounds = 3, warmupRounds = 1)
    public void stringBuilderBenchmark()  {
        
        StringBuilder builder = new StringBuilder();
        for (long i = 0; i < LOOPS_COUNT; i++) {
            builder.append('i').append(i);
        }
        System.out.println(builder.toString().length());
    }
    
    @Test
    @BenchmarkOptions(benchmarkRounds = 3, warmupRounds = 1)
    public void textBuilderBenchmark()  {
        
        TextBuilder builder = new TextBuilder();
        for (long i = 0; i < LOOPS_COUNT; i++) {
            builder.append('i').append(i);
        }
        System.out.println(builder.toString().length());
    }
}


And the output produced by the test on my machine (Ubuntu VM with 5 GB and a Core i7):

  • StringConcatenationBenchmark.stringBuilderBenchmark: [measured 3 out of 4 rounds, threads: 1 (sequential)] round: 0.68 [+- 0.06], round.block: 0.00 [+- 0.00], round.gc: 0.00 [+- 0.00], GC.calls: 6, GC.time: 0.19, time.total: 2.66, time.warmup: 0.64, time.bench: 2.03
  • StringConcatenationBenchmark.textBuilderBenchmark: [measured 3 out of 4 rounds, threads: 1 (sequential)] round: 0.47 [+- 0.01], round.block: 0.00 [+- 0.00], round.gc: 0.00 [+- 0.00], GC.calls: 5, GC.time: 0.18, time.total: 1.97, time.warmup: 0.57, time.bench: 1.41

Conclusion:
This micro-benchmark demonstrated that TextBuilder is 30% faster than StringBuilder (in our trivial test) and more stable (+- 0.01 compared to +- 0.06). GC metrics are similar in both cases.

Servlet Monitoring with Metrics

Metrics from Coda Hale is a wonderful library that eases monitoring the performance of your code. It gives you tons of metrics directly via JMX, logs or CSV files.

I use it to monitor the performance of a servlet in a JEE6 application. It is very simple to set up:
  1. Add the following dependency to your pom.xml:

            <dependency>
                <groupId>com.yammer.metrics</groupId>
                <artifactId>metrics-core</artifactId>
                <version>2.2.0</version>
            </dependency>
    
  2. Declare a Timer:
        /**
         * the timer for monitoring incoming HTTP POST requests.
         */
        private final Timer postTimer = Metrics.newTimer(getClass(), "post-requests-timer", TimeUnit.MILLISECONDS, TimeUnit.SECONDS);
    
    This code instantiates a new Timer whose durations are measured in milliseconds and rates in seconds (to compute requests per second).

  3. Use the previous Timer when handling HTTP request:
        /**
         * Handles the HTTP POST method.
         *
         * @param req servlet request
         * @param resp servlet response
         * @throws ServletException if a servlet-specific error occurs
         * @throws IOException if an I/O error occurs
         */
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
            // starts stopwatch
            final TimerContext context = postTimer.time();
    
            // handles request and response, calls business logic
            ...
    
            // ends stopwatch
            context.stop();
        }
    
  4. Put some load on your servlet and monitor it using JConsole for instance: