SpyGlass Tracer 1.8

SpyGlass Tracer is a tool for real-time Java and J2EE application monitoring. It is designed to help the production team analyze the behaviour of a running Java/J2EE platform and find the actual causes of malfunctions, bottlenecks and decreases in performance and response time.

 

 

SpyGlass Tracer is designed to collect metrics that describe in real time the behaviour of the analyzed system (Virtual Machine metrics, application components, SQL statements, user transactions) and to show this information in interactive dashboards, highlighting anomalous values.

SpyGlass Tracer is based on ByteCode Instrumentation (or BCI), which makes it possible to monitor applications by adding probes to loaded classes without any prior change to the source code. The collected information is used to extract applicative metrics related to user actions, technical components, 3rd party applications or legacy libraries.

SpyGlass Tracer is entirely based on standard Java technologies, so you can expect it to work in any environment that follows the Java specifications (any JVM and any Application Server).

 

SpyGlass Tracer is available in five versions:

 

  • SpyGlass Tracer Free: freely available from the SpyGlass Tools site, with an activation key that can be generated for free. This version offers a general level of instrumentation and support for standard J2EE components, with visibility of user transactions. This version can be executed only on a single-processor machine.
  • SpyGlass Tracer Basic: the basic commercial version, which can be executed on machines with at most four logical processors. This version offers the same features as the Free version, with extended support for standard J2EE components and full support for tracing SQL statements, JVM metrics and the most common frameworks.
  • SpyGlass Tracer Pro: a professional tool for real-time monitoring of complex Java/J2EE applications. This version has the same features as the Basic version, with in addition the possibility to customize instrumentation directives, allowing you to monitor non-standard Java components as well and supporting open-source frameworks. It also offers on-the-fly instrumentation, detailed analysis of multithreaded applications, HTTP request and session analysis, hotspot profiler analysis and automatic report generation. No processor limits.
  • SpyGlass Tracer Enterprise: the same features as the Pro version, with in addition recording features and historical analysis. You can also define alarms with related actions to be executed automatically if a particular condition occurs. A web interface can be used to analyze the collected information and configure all alarms and actions.
  • SpyGlass Tracer Cloud: a professional version with optimized configurations to work with applications deployed on the most common cloud services.

 

Architecture

 

SpyGlass Tracer is composed of three main components:

 

  • An agent, configured on the JVM where the application to be monitored is executed, instruments the code, collects all the information and publishes it;
  • A client, implemented as a VisualVM plugin, connects to the agent, collects the published information and visualizes it for performance analysis;
  • A server, connected to many agents at the same time, that stores all the information, offers a web interface to navigate it, and permits defining alarms and executing related actions (available only for the Enterprise and Cloud versions).

 

 

Thanks to its minimal requirements in terms of CPU time and memory, SpyGlass Tracer can be used in production environments to monitor real-time application metrics for any Java platform.

SpyGlass Tracer is also the perfect companion in development or test environments to monitor and document the dynamic behaviour of Java applications.

 

Main features

 

SpyGlass Tracer's main features are:

 

  • Collecting information on user transactions (traces) executed from the user request to the delivery of the server response
  • Analysis of information at different, dynamic aggregation levels, as a tree view and as a graphic diagram
  • Monitoring the behaviour of standard J2EE components (Servlet, JSP, JSF, Session Bean, Entity Bean, Message Driven Bean) and of the most used open source frameworks (Struts, Quartz)
  • Collecting metrics and statistics related to every SQL statement executed
  • Collecting all of the information related to the evolution of the status of threads and locks during the execution of monitored applications
  • Monitoring the most relevant memory metrics, the garbage collector and many different pieces of information related to the activity and status of the JVM
  • Manual activation and deactivation of the agent
  • Saving execution sessions for offline analysis

 

Java/J2EE Application Monitoring

 

To ensure high performance of a modern Java/J2EE platform, we have to be able to monitor its behaviour in its production environment. In fact, it is well known that many unexpected behaviours and bottlenecks can be observed only while the application is executing in the real environment with real users and real data, and not in any earlier stage. That’s why an efficient monitoring tool should have:

 

  • Low impact on system performance
  • Easy configuration and flexibility
  • Independence from source code availability
  • Easy identification and understanding of the collected information
  • Useful aggregations of the different metrics to identify critical situations: end-to-end transactions, by component, by application layer
  • Details about how resources are used
  • Details about connections with external resources
  • Drill-down tools to analyze bottlenecks, identifying, with the most appropriate metrics, the components to be optimized
  • Administrative tools to define alarms and execute actions to resolve them
  • Reports of relevant information about weird behaviours, as a sharable basis for finding the right solution on how to avoid them.

 

Automatic instrumentation (BCI) is the most modern approach to monitoring a Java application: with this technique we can automatically insert dedicated probes during class loading, while the application is executing. These probes are the hooks used to collect the metrics that will be exposed to external clients. The main advantages of this approach are (see the sketch after this list):

 

  • Independence from source code: no source code manipulation is required. You can also instrument third-party libraries
  • Limited overhead
  • Metric activation and deactivation on the fly
  • Usage of standard technologies (ByteCode Instrumentation) available on all 1.5 or later certified JVMs
  • Extraction of metrics using configuration files instead of writing code
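
To make the mechanism concrete, here is a minimal, generic sketch of the standard java.lang.instrument hook that this kind of agent is built on. It is not SpyGlass Tracer's actual code (class and package names are hypothetical): every class passes through the transform method at load time, where a BCI agent can decide whether to rewrite it.

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class SketchAgent {
    // Called by the JVM before main() when -javaagent is configured.
    // The agent jar's manifest must declare this class as Premain-Class.
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain pd, byte[] classfileBuffer) {
                // Decide from configuration whether this class needs probes;
                // returning null leaves the byte-code untouched.
                if (className == null || !className.startsWith("com/company/")) {
                    return null;
                }
                return addProbes(classfileBuffer); // rewrite with a BCI library
            }
        });
    }

    private static byte[] addProbes(byte[] original) {
        // Placeholder: a real agent rewrites the byte-code here (e.g. with ASM).
        return original;
    }
}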

 

It is possible to monitor different information:

 

  • Average time to complete a user request (such as the login feature of a web application)
  • Number of elaborations in a defined time window (such as how many invoices have been delivered in one minute)
  • Life cycle of an application component (such as how a job is effectively managed by the system)
  • Resource usage: CPU, Threads, Memory, Garbage Collector, JIT Compiler, loaded classes
  • Lock or waiting events on threads during the elaboration of applicative requests
  • Number and duration of SQL statements and effective usage of database connections
  • Number and duration of requests to external systems (Java Connectors)
  • Data transfer
  • Usage of a web session

 

Once all this information has been extracted, it is possible to define some management scenarios, defining alerts and also automatic actions to be executed when the alerts trigger:

 

  • Run ad hoc scripts when some specific resource is exhausted
  • Run specific actions when an application component is executed (such as saving the information that an invoice has been generated) or updated to a given status (such as tracing when an invoice has been delivered)

 

Goal of monitoring

 

The final goal of monitoring is to expose all information needed to ensure maximum quality of Java application execution.

We can define three stages on the way to this goal:

  • I don’t know what I need nor how to retrieve it: this is the first stage of performance management, common at the beginning of the monitoring exploration. A tool like SpyGlass Tracer offers an organized approach to facing these situations and shows the most relevant information in an efficient presentation.
  • I know what I need but I don’t know how to retrieve it: this is the level where the performance manager knows the basic techniques of application monitoring but doesn’t have an efficient tool to retrieve all the needed information. SpyGlass Tracer is a professional tool to monitor the system in real time, letting you retrieve all the information needed to analyse and understand how to fix weird situations.
  • I know what I need and how to retrieve it: at this level the performance manager has a strategy for using tools to monitor performance efficiently. SpyGlass Tracer is the right tool to help the performance manager, adapting at the same time to different work methodologies and showing all the needed information.

 

 

Collecting metrics

 

Managing the performance of a complex J2EE system is a particularly complex activity that requires a wide range of competencies (Hardware Architecture, Operating System, Network, Security, DBMS, JVM, Application Server, frameworks).

Sometimes it may happen that a system that behaves correctly in the test environment shows bottlenecks and slow performance after the production release, or does not work at all: all configuration parameters seem to be correct but the system is, in short, not usable.

Sometimes the team that manages the production environment reports weird behaviours that cannot be replicated in other environments.

In all of these situations we need a tool that can “look into” the application, like an X-ray, and describe what is happening in real time.

 

In practice, we need to collect working metrics. Typical metrics are:

 

  • Response time
  • Throughput
  • Execution time for each method
  • Execution context metrics: hardware, Operating System, JVM, Application Server

 

 

From metrics to information

 

Once all of the metrics have been extracted, we need to transform these data into information that is meaningful from the architectural, functional and applicative perspectives:

 

  • Aggregate executions into user transactions to monitor specific service levels
  • Aggregate information by logical or architectural components, so it is possible to identify underperforming components
  • Identify the behaviour of resources in relation to specific executed functions
  • Monitor critical events in resource management (such as thread locks or the activity of the Garbage Collector in memory)
  • Identify activities on databases and/or external applications/systems

 

 

Removing background noise

 

At this point we have a huge amount of data, and weird behaviours are hidden in this ocean of information, most of which relates to normal behaviour and so is not useful for our analysis.

That’s why we have to clear away the non-useful data using some “filters”, so we can remove the background noise and highlight only the relevant information.

 

Sharing information

 

The next step is to identify the requests that are not executed within our performance requirements and analyze them, with the goal of understanding the causes of these unexpected behaviours.

We can generate a report for each identified critical situation that happened in production, collecting and aggregating all the available information. Then we can define alarms that automatically check for these critical situations and possibly send an alert or execute an action.

Using periodical reports, we can verify compliance with the defined service level agreements.

 

Reactive or Proactive approach?

 

Now that we have this amount of data, we can decide how to use it and how to manage the monitored platform:

 

  • Reactive approach: we wait for an alarm, and only after that do we analyse the available information to identify a solution for the weird situation
  • Proactive approach: we use information about trends to prevent critical events from happening

 

 

 

Why use SpyGlass Tracer

 

SpyGlass Tracer is designed to monitor production platforms, extracting the technical and business information required to diagnose problems and alert the operations team.

Once installed, the Agent starts to extract the configured metrics, giving unprecedented visibility into your platform's activity. Monitoring sessions will enable you to identify critical application areas and system bottlenecks before they become a problem for your users.

The SpyGlass Tracer Agent is characterized by a small memory and CPU footprint and a very limited influence on application activity.

SpyGlass Tracer does not require any source code modification nor any configuration updates apart from the java-agent configuration, so it is easy to activate and deactivate.

SpyGlass Tracer is based on standard Java technologies and can be used on any certified Java Virtual Machine and Application Server.

In a few words, SpyGlass Tracer is simple to install, simple to use and very effective, because it is designed with application performance management tasks in mind.

 

SpyGlass Tracer in Load-Test and Pre-Production environment

 

SpyGlass Tracer has been designed to work in the Production environment, but we can use it in other situations as well.

For example, in a load-test environment we can use it to monitor how the system reacts to load tests, monitoring resource usage or the executed code.

The ability to have an image of the execution platform in a “protected” environment helps to identify, by difference, weird behaviours in the production stage, allowing you to apply a possible counter-measure before the situation becomes a problem for the users.

Finally, we can use all the collected information to create standard documentation describing the platform's execution behaviour.

 

SpyGlass Tracer in Q/A environment

 

SpyGlass Tracer can be very useful to verify the quality of developed source code, giving relevant statistical information such as:

 

  • Number of executed SQL statements (useful for tuning a framework like Hibernate)
  • Correct JDBC connection management
  • Correct exception management
  • Excessive resource usage, such as memory and CPU
  • Call structure verification, for validating the refactoring level
  • Thread synchronization problems

 

SpyGlass Tracer in Development environment

 

In the development environment, SpyGlass Tracer can be used to automatically generate technical documentation describing sequence diagrams, SQL statements and the application components used.

The created reports can be attached to the usual technical documentation as a description of the functional implementation.

Even if it is not a profiler, SpyGlass Tracer can be configured to collect detailed information, down to the method level, about the response behaviour of the application under stress conditions.

 

Quick Start

 

The first configuration of SpyGlass Tracer is very easy.

If you’re testing a commercial version, you should have received the jars and the license by mail.

If you’re testing the free version, you can download the jars from the SpyGlassTools.com site and generate a license as described in the next paragraphs.

 

The following steps should be followed:

 

  • Getting the SpyGlassTracer.zip file
  • Installing the agent
  • License setup
  • Starting the server and verifying correct agent start-up
  • Installing and configuring the plugin

Download the SpyGlassTracer.zip file

 

If you’re testing a commercial version of SpyGlass Tracer, you should already have been provided with a working version of the agent and the plugin.

If you’re testing the free version of SpyGlass Tracer, you should go to the SpyGlass Tools site (www.spyglasstools.com/downloads) and download the latest release of the package, which contains two files:

 

  • SpyGlass Tracer Agent: a jar file, the agent needed to instrument the application
  • SpyGlass Tracer Plugin: a file with extension “.nbm” (a NetBeans Module), the plugin to be installed in VisualVM

 

Installation of the agent

 

To install the agent you should follow these steps (required for both commercial and free licenses):

 

  • Create a directory on the machine where your Java platform is executed
  • Copy into that directory the files required by the agent: SpyGlassAgent.jar, the license file (if available) and the optional configuration file (needed only for commercial releases, Pro version and above)

 

Now we have to modify the script that starts the application you’re going to monitor. In particular, we have to add the following Java option:

 

-javaagent:<agent-path>SpyGlassAgent.jar

 

By default the agent publishes information on the IP address defined in the license file, using port 6009; optionally, you can change this configuration by specifying the needed address:

 

-Dconcept.by.tracer.ip=<ip-address>

-Dconcept.by.tracer.port=<port-number>

 

NB: Keep in mind that this configuration will work only if it is consistent with the agent license.
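
For example, a complete start-up command could look like the following (paths, address and port are purely illustrative):

java -javaagent:/opt/spyglass/SpyGlassAgent.jar -Dconcept.by.tracer.ip=192.168.1.10 -Dconcept.by.tracer.port=6009 -jar MyApplication.jar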

Creation and setup of the free license

 

If you’re testing a commercial version, you should contact SpyGlass Tools support (support@spyglasstools.com) and request a valid license, submitting your contract id and server information (IP address, number of CPUs, Operating System, version number).

You’ll receive a license file that you have to copy into your agent directory.

 

If you have downloaded the free version of SpyGlass Tracer you have to generate a free license and install it.

The needed license can be generated for free at the following URL:

http://www.spyglasstools.com/products/spyglass-tracer/spyglass-tracer-free-edition/generate-free-key.

 

 

When all the required information has been entered, press the “generate” button.

After that, you’ll be able to download the license file (extension “.lic”), which you have to copy into the folder where you installed the agent.

 

NB: in the “Your server IP” field you have to type the exact address where you need to publish the extracted metrics.

NB2: The expiration date of the free license is fixed at 1 month from the creation date.

 

 

Starting the server and verifying that the agent loads correctly

 

At this point, if all the previous actions have been correctly executed, you can start the application server:

 

 

Check in the logs for the message “LICENSE IS VALID”, which confirms that the agent has started correctly.

 

If for some reason the license file is not correct, when you start the application you should see a message like the following (this example has a wrong expiration date):

 

 

 

Plugin installation and configuration

 

All of the extracted metrics can be visualized using a GUI client developed as a VisualVM plugin.

VisualVM is freely distributed together with all SUN/Oracle JDKs starting from version 1.6. Otherwise, you can download VisualVM from the following URL:

 

http://visualvm.java.net/download.html.

 

Normally, if you’re using the SUN/Oracle JDK, you can find the “jvisualvm.exe” file (jvisualvm on Linux) in the “bin” directory of the JDK directory. If you have downloaded VisualVM, you can find the executable in the “bin” directory of the VisualVM directory.

 

When started, VisualVM shows the following screen:

 

 

If you’re going to monitor a locally running JVM, you should see it in the left panel. In our example we’re working with JBoss (PID 3652) on a Windows machine.

Using the Tools/Plugins menu, start the installation of the SpyGlass Tracer plugin:

 

 

At this step select the “Downloaded” tab.

Click the “Add Plugins…” button and select the downloaded SpyGlassPlugin.nbm file.

 

 

Finally, press the “Install” button to proceed with the installation.

 

 

At the first installation you may see a message about an unverified (untrusted) certificate: skip this message and continue.

 

NB: at the end of the installation we suggest closing and restarting VisualVM.

 

After the installation, you should see the SpyGlass Tracer plugin icon in the left panel.

 

 

Right-click this icon and select “Add SpyGlass Tracer agent…” to open the configuration window of the SpyGlass Tracer plugin, where you enter the IP address and port of the agent:

 

 

After entering the IP address and port, you should see the plugin starting:

 

 

When you see the status “connected”, you know that the plugin is receiving data.

So you can start to work with your monitored application, and you should see the plugin receiving metrics and transactions.

 

As a further verification, you can go to the “JVM” panel and check that you’re receiving data on memory and CPU.

 

Updates on this release

 

Release 1.8 of SpyGlass Tracer introduces many new features that simplify its usage and help to identify bottlenecks:

 

  • The new “Profiler” area, where you can analyze, at different aggregation levels, information related to the monitoring session: execution times, JDBC statements and exceptions. This section simplifies the identification of performance issues and unexpected behaviours.

 

 

  • The new “url” panel aggregates information about the duration of all calls at the URL level; in this way you can immediately identify calls that last longer than usual:

 

 

  • The new “hotspots” panel aggregates information about the duration of calls at the method level; in this way you can immediately identify the most time-consuming methods. The “Inherent time” column shows the execution time of a method without the execution time of nested instrumented sub-methods:

 

 

  • The new “memory” panel aggregates information about the memory usage of the different calls; in this way you can immediately identify the calls with the highest memory consumption:

 

 

  • The new “cpu” panel aggregates information related to the CPU usage of the different calls; in this way you can immediately identify the calls with the highest CPU consumption:

 

 

  • The new “waiting” panel aggregates information about the time the application threads of the different calls spend in waiting status; in this way you can immediately identify synchronization issues:

 

 

  • The new “blocked” panel aggregates information about the time the application threads of the different calls spend in blocked status; in this way you can immediately identify synchronization issues:

 

 

  • The new “sql” panel aggregates information related to the execution time of the different SQL statements across all calls to the JDBC drivers; in this way you can immediately identify critical SQL statements:

 

 

  • The new “connection” panel shows information about the monitored JDBC connections: creation and usage are traced so that you can immediately identify connection leaks and configure the right size for the connection pool:

 

 

  • The new “exception” panel shows the last exceptions identified by the agent, with related details and messages:

 

 

  • Now (for SUN/Oracle JVM 1.6.0_24 or later) it is possible to monitor memory usage at the single-transaction level; in this way you can immediately see how memory is effectively used and identify the calls with the highest memory consumption:

 

 

  • There is a new report where all the relevant information is shown in one place:

 

 

SpyGlass Tracer Agent

 

The SpyGlass Tracer Agent (hereafter simply “the Agent”) is the component that extracts the metrics describing the execution of the Java/J2EE application and makes them available to external clients.

SpyGlass Tracer Agent has been designed to have a small memory and CPU footprint (overhead), so you can use it efficiently in your production environment.

The SpyGlass Tracer Agent is based on standard Java technologies, and you can use it with any application running on a certified JVM, version 1.5 or later.

 

Instrumentation

 

The instrumentation technique (ByteCode Instrumentation - BCI) is the basis of how the agent works: it permits the automatic modification of a class's byte-code, inserting probes that collect execution metrics.

 

Instrumentation is a standard Java technology available since the JDK 1.5 release, and it is available in all third-party certified JVMs (such as JRockit or the IBM JDK).

Usage of JVM 1.5 or above is the basic requirement for the agent.

The approach based on instrumentation lets you extract information about the effective behaviour of the application without any modification at the source code level. It also makes it possible to capture metrics from external libraries for which the source code is not available at all (for example, JDBC libraries provided by third-party vendors).

 

Instrumentation is applied by inserting a well-defined set of byte-codes (a probe) that lets you measure the needed information (a metric).
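
Conceptually, the effect on an instrumented method is equivalent to the following hand-written wrapper (an illustrative sketch only: the record() helper stands in for the agent's internal collector and does not represent any SpyGlass API):

class ProbeEffect {

    // Original method, as written by the developer.
    static int add(int i, int j) {
        return i + j;
    }

    // Conceptual result after byte-code instrumentation: the same method
    // with entry and exit probes woven around the unchanged body.
    static int addInstrumented(int i, int j) {
        long start = System.nanoTime();                     // entry probe
        try {
            return i + j;                                   // original body
        } finally {
            record("add(II)I", System.nanoTime() - start);  // exit probe
        }
    }

    // Stand-in for the agent's internal metric collector (hypothetical).
    static void record(String methodDescriptor, long elapsedNanos) {
    }
}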

 

 

Instrumentation could be executed in 3 different ways:

 

  • Static instrumentation: this technique requires the instrumentation of all classes as part of the application deployment process. You have to be sure that instrumentation is executed after every class build. For applicative classes you can define an instrumentation task in your build script (ant, maven or similar). The instrumented classes are then packaged, replacing the original ones. It is also important to remember that usually you have to instrument non-applicative classes as well (for example, if you need to extract metrics on SQL statements, you need to instrument the jar file of the JDBC drivers). Non-applicative jars need to be instrumented only once (and again after any configuration file modification) and then deployed. This approach is very expensive and is usually avoided.
  • Dynamic instrumentation: this technique is based on the fact that it is possible to intercept a class's byte-code when the class is loaded by the class-loader. The byte-code is read, passed to a procedure that checks a configuration file and decides whether to instrument it, and then loaded into memory and executed. The big plus of this approach is that you do not have to worry about instrumenting all classes beforehand, because instrumentation is executed automatically during their effective loading into memory. The agent must be properly configured at JVM start-up to intercept the class loading. This is usually the preferred approach.
  • On-the-fly instrumentation: this approach lets you inject the agent and instrument byte-code when a class has already been loaded in the JVM and executed; obviously this class must not be already (statically or dynamically) instrumented. In this case the agent, when it is loaded, asks the JVM to reload the classes and instruments their code following the dynamic approach: this operation may require a long interval of non-operation of the application while all the classes are reloaded, but it has the advantage of instrumenting a whole non-instrumented application.

 

The simplest and most practical way to work is dynamic instrumentation, and we suggest using it with SpyGlass Tracer.

 

SpyGlass Tracer modifies byte-codes using the ASM library (http://asm.ow2.org).
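
For readers unfamiliar with ASM, the following is a generic sketch of the visitor pattern it provides for weaving an entry probe into every method of a class. It only illustrates the technique and is not SpyGlass Tracer's actual transformer (the Probe class is hypothetical):

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class ProbeWeaver {
    // Rewrites a class, inserting a call to Probe.enter() at each method entry.
    public static byte[] weave(byte[] originalBytes) {
        ClassReader reader = new ClassReader(originalBytes);
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_MAXS);
        ClassVisitor visitor = new ClassVisitor(Opcodes.ASM9, writer) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String desc,
                                             String signature, String[] exceptions) {
                MethodVisitor mv = super.visitMethod(access, name, desc,
                                                     signature, exceptions);
                return new MethodVisitor(Opcodes.ASM9, mv) {
                    @Override
                    public void visitCode() {
                        super.visitCode();
                        // Injected probe: a static call executed at method entry.
                        visitMethodInsn(Opcodes.INVOKESTATIC,
                                        "com/example/Probe", "enter", "()V", false);
                    }
                };
            }
        };
        reader.accept(visitor, 0);
        return writer.toByteArray();
    }
}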

 

Configuring the agent

 

The agent is based on 3 different files that should be placed in the same directory:

 

  • SpyGlassAgent.jar: (mandatory) the library of the agent, with its Java classes
  • SpyGlassTracer.lic: (mandatory) the license file, needed to execute the agent
  • SpyGlassAgent.config: (optional) the configuration file, used to define what the agent should and should not do. This file is not needed for the Free and Basic versions.

 

Dynamic instrumentation is configured in the JVM start-up script using the following options:

 

  • -javaagent:<agent-path>SpyGlassAgent.jar to configure the agent to execute class instrumentation
  • -Dconcept.by.tracer.ip=<ip-address> to configure the IP address where the collected information is published
  • -Dconcept.by.tracer.port=<port-number> to configure the port used to publish the collected information (if not defined, 6009 is used by default).

 

Configuring the agent with Tomcat

 

To configure Tomcat, we suggest modifying the “catalina.bat” file (or the similar file on Linux or UNIX). The final file should look something like the following (added statements in bold):

 

%_EXECJAVA% ^
-javaagent:<agent-path>SpyGlassAgent.jar ^
-Dconcept.by.tracer.ip=1.2.3.4 ^
-Dconcept.by.tracer.port=1234 ^
%JAVA_OPTS% %CATALINA_OPTS% %DEBUG_OPTS% ^
-Djava.endorsed.dirs="%JAVA_ENDORSED_DIRS%" -classpath "%CLASSPATH%" ^
-Dcatalina.base="%CATALINA_BASE%" -Dcatalina.home="%CATALINA_HOME%" ^
-Djava.io.tmpdir="%CATALINA_TMPDIR%" %MAINCLASS% %CMD_LINE_ARGS% %ACTION%

 

To the same end, you could add the same options to the JAVA_OPTS variable.
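
For example (illustrative values; on recent Tomcat versions this is typically placed in a setenv.bat, or setenv.sh on Linux):

set "JAVA_OPTS=%JAVA_OPTS% -javaagent:<agent-path>SpyGlassAgent.jar -Dconcept.by.tracer.ip=1.2.3.4 -Dconcept.by.tracer.port=1234"

or, on Linux or UNIX:

export JAVA_OPTS="$JAVA_OPTS -javaagent:<agent-path>SpyGlassAgent.jar -Dconcept.by.tracer.ip=1.2.3.4 -Dconcept.by.tracer.port=1234"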

 

Configuring the agent with JBoss 5.x

 

To configure JBoss, we suggest modifying the “run.bat” file (or the similar file on Linux or UNIX). The final file should look something like the following (added statements in bold):

 

"%JAVA%"
-javaagent:<agent-path>SpyGlassAgent.jar

-Dconcept.by.tracer.ip=1.2.3.4

-Dconcept.by.tracer.port=1234

%JAVA_OPTS% ^

-classpath "%JBOSS_CLASSPATH%" ^

org.jboss.Main %*

 

Configuring the agent with JBoss 7.x

 

The latest release of JBoss, commonly known as AS7, integrates a new model to manage resources (such as classes and logging) based on OSGi; that's why you have to tell the application server not to transform the agent's classes.

First, check that the following Java options are present in the start-up script:

 

-javaagent:<agent-path>SpyGlassAgent.jar
-Dconcept.by.tracer.ip=1.2.3.4
-Dconcept.by.tracer.port=1234

 

After that, you have to tell JBoss to exclude some packages from the automatic module loading process:


-Djboss.modules.system.pkgs=org.jboss.byteman,by.concept,org.jboss.logmanager

 

And finally, you have to tell JBoss how to manage the logging classes (some paths may differ from version to version):


-Djava.util.logging.manager=org.jboss.logmanager.LogManager
-Xbootclasspath/p:%JBOSS_HOME%/modules/org/jboss/logmanager/main/jboss-logmanager-1.2.2.GA.jar
-Xbootclasspath/p:%JBOSS_HOME%/modules/org/jboss/logmanager/log4j/main/jboss-logmanager-log4j-1.0.0.GA.jar
-Xbootclasspath/p:%JBOSS_HOME%/modules/org/apache/log4j/main/log4j-1.2.16.jar

 

 

Configuring the agent with other application servers

 

The simplest manual configuration is based on modifying the script that launches the JVM, adding the required agent options (for example, on Windows this means the line where the “java.exe” file is launched). All the parameters needed to configure the agent must be added there:

 

  • -javaagent:<agent-path>SpyGlassAgent.jar to configure the agent to execute class instrumentation
  • -Dconcept.by.tracer.ip=<ip-address> to configure the IP address where the collected information is published
  • -Dconcept.by.tracer.port=<port-number> to configure the port used to publish the collected information (if not defined, 6009 is used by default).
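
Putting it all together, the modified launch line should end up looking like the following (a generic sketch; the classpath and main class depend on your application server):

java -javaagent:<agent-path>SpyGlassAgent.jar -Dconcept.by.tracer.ip=<ip-address> -Dconcept.by.tracer.port=<port-number> -classpath <application-classpath> <main-class>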

 

The SpyGlassAgent.config file

 

The SpyGlassAgent.config file, placed in the agent directory, lets you customize the behaviour of the agent.

This file is not used (i.e. it is ignored) in the Free and Basic versions.

 

The “SpyGlassAgent.config” is an XML file and can be edited using a common text editor.

 

In the XML file you can identify different sections, related to their usage:

 

  • layers: to specify components to be instrumented in addition to the standard J2EE ones (which are already automatically instrumented)
  • agent: to specify the core configuration of the agent
  • plugin: to configure the VisualVM plugin
  • alerts: to configure alert thresholds and traces

 

An example:

 

<?xml version="1.0"?>
<config>
  <layers>
    <layer name="Annotation" tabpos="106">
      <classes annotation="com.spyglasstools.tracertest.instrumentation.Annotation" group="{FULL_CLASS_NAME}">
        <methods regexp="(.)*" item="{METHOD_NAME}"/>
      </classes>
    </layer>
  </layers>
  <agent>
    <include package="org.apache.tomcat.dbcp,org.jboss.jca.adapters.jdbc,java.lang.NullPointerException"/>
    <exclude package="com.ctc,com.ibm,com.arjuna,EDU,com.google,antlr,org.codehaus,org.dom4j,org.picketbox,org.jnp,org.slf4j,org.jcp,org.primefaces,bsh"/>
    <log destinations="file" level="INFO"/>
    <metrics enabled="true" sample-time="5000"/>
    <trace min-duration="0" show-pending="true"/>
    <filterRoot layers="scheduled,persistence"/>
    <queries max-length="200"/>
    <jdbcConnection enabled="true"/>
    <exceptions stack-size="5" enabled="true"/>
    <jvm enabled="true" sample-time="1000"/>
    <threads enabled="true" sample-time="200" max-depth="10"/>
    <socket port="6009"/>
    <overview max-active-user-time="12000"/>
  </agent>
  <plugin>
    <log level="INFO"/>
    <chart top-number="10" h-resolution="60"/>
    <trace expiration-time="-1" max-items="200" recent-life-time="5000"/>
    <queries max-items="100"/>
    <exceptions max-items="50"/>
    <numeric group-separator="." decimal-separator=","/>
  </plugin>
  <alerts>
    <alert property="DURATION" critical-threshold="10000" warning-threshold="0" relation="GREATER_THAN"/>
    <alert property="MEMORY" critical-threshold="1024" warning-threshold="0" relation="GREATER_THAN"/>
    <alert property="EXCEPTIONS" critical-threshold="1" warning-threshold="0" relation="GREATER_THAN"/>
  </alerts>
</config>

 

The Agent section

 

This is the list of the available configuration options for the agent:

 

  • include: here you can list the packages/classes that may be instrumented. This list has higher priority than the exclude clause below. Reducing the list of included classes will improve the performance of the class-loading process, and so of the application start-up.

 

<include package="org.apache.tomcat.dbcp,org.jboss.jca.adapters.jdbc,java.lang.NullPointerException"/>

 

  • exclude: here you can list the packages/classes that you do not want to instrument in any case. Increasing the list of excluded classes will improve the performance of the class-loading process.

 

<exclude package="com.ctc,com.ibm,com.arjuna,EDU,com.google,antlr,org.codehaus,org.dom4j,org.picketbox,org.jnp,org.slf4j,org.jcp,org.primefaces,bsh"/>

 

  • log: here you can define the agent log properties:
    • destinations: where the log will be written. Possible values are file (the file is saved in the log directory inside the agent installation directory) or console
    • level: how detailed the log will be. The default value is INFO. For support usage you can define SUPPORT; this second value impacts the agent's performance but is useful for troubleshooting.

 

<log destinations="file" level="INFO"/>

 

  • metrics: here you can define how metrics will be collected from the instrumented components defined in the layers section and published:
    • enabled: enables metrics collection and publishing. Accepted values are true or false
    • sample-time: defines the sample interval for metrics in milliseconds. This is also used as the update interval in the plugin.

 

<metrics enabled="true" sample-time="5000"/>

 

  • trace: here you can define how collected traces will be managed:
    • min-duration: defines the minimal duration in milliseconds of the traces to be published. If a trace has a duration less than min-duration, it will be filtered out
    • show-pending: enables publishing of pending traces

 

<trace min-duration="0" show-pending="true"/>

 

  • filterRoot: here you can configure the agent to ignore non-applicative executions:
    • layers: a comma-separated list of values defining the layers to be filtered: the scheduled value is used to exclude Quartz executions; the persistence value is used to exclude SQL statements not related to applicative executions.

 

<filterRoot layers="scheduled,persistence"/>

 

  • queries: here you can define how to collect metrics on SQL statements:
    • max-length: defines the maximum length in characters of the captured statement. Longer statements will be truncated.

 

<queries max-length="200"/>

 

  • jdbcConnection: here you can define how to collect metrics on JDBC connections:
    • enabled: enables JDBC connection metrics collection; false disables it.

 

<jdbcConnection enabled="true"/>

 

  • exceptions: here you can define how to collect metrics on exceptions:
    • enabled: enables metrics collection
    • stack-size: defines the number of stack-trace lines to be collected

 

<exceptions stack-size="5" enabled="true"/>

 

  • jvm: here you can define how to collect information on JVM activity (memory pools and runtime metrics). JVM data are collected by sampling the JMX MBeans exposed by the JVM. We suggest not using an interval below 1000 ms.
    • enabled: enables metrics collection
    • sample-time: defines the sample interval in milliseconds for the metrics

 

<jvm enabled="true" sample-time="1000"/>

 

  • threads: here you can define how to collect information on application threads. Thread information is captured by sampling the Threading MBean. A too-small sample-time can cause overhead and locks on the running application. We suggest enabling this in production only for a limited time and not using values below 200 ms.
    • enabled: enables metrics collection
    • sample-time: defines the sample interval in milliseconds
    • max-depth: defines the number of stack-trace lines to be collected for waiting and blocked threads

 

<threads enabled="true" sample-time="200" max-depth="10"/>

 

  • socket: here you can define the socket used to publish the collected metrics:
    • port: defines the socket port

 

<socket port="6009"/>

 

  • overview: here you can define the parameters for the “overview” panel in the VisualVM plugin:
    • max-active-user-time: a user is considered active if he/she has posted at least one request within this period of time, expressed in milliseconds (note: 2 minutes corresponds to 120000)

 

<overview max-active-user-time="12000"/>

 

The Plugin section

 

Let’s see the possible configuration options for the VisualVM plugin:

 

  • log: here you can define how the plugin should log its activities:
    • level: how detailed the log will be. The default value is INFO. The other values (DEBUG and SUPPORT) are useful only for debugging and will be suggested by the support team.

 

<log level="INFO"/>

 

  • chart: here you can define how the plugin charts will be shown:
    • top-number: defines the number of items evaluated by the Top Peak and Top Aggregated selectors. The suggested value is 10.
    • h-resolution: defines the maximum number of samples available in each chart. The default value is 60.

 

<chart top-number="10" h-resolution="60"/>

 

  • trace: here you can define the behaviour of the “trace” panel:
    • expiration-time: defines how long a trace will remain in the “running” list (milliseconds). Defining an expiration time is useful to minimize the plugin's memory requirements.
    • max-items: defines the maximum number of items available in the list before the older ones are removed. Increasing the max-items value may require more memory for the plugin.
    • recent-life-time: defines how long a trace will be highlighted as new in the list (milliseconds). This value is usually equal to the plugin update interval.

 

<trace expiration-time="-1" max-items="200" recent-life-time="5000"/>

 

  • queries: here you can define the Profiler/queries panel behaviour:
    • max-items: defines the maximum number of items available in the list

 

<queries max-items="100"/>

 

  • exceptions: here you can define the Profiler/exceptions panel behaviour:
    • max-items: defines the maximum number of items available in the list before the older ones are removed

 

<exceptions max-items="50"/>

 

 

The Alerts section

 

Alerts are used by the plugin to populate the Notification tab: every time a monitored metric matches an alert rule, a new line is added.

The possible configuration attributes for alerts are:

 

  • property: defines the name of the property to check. The defined properties are: DURATION (trace duration), MEMORY (trace used memory), EXCEPTIONS (trace exception count), DB-CALLS (trace database calls)
  • critical-threshold: defines the critical level of the property that fires a new notification
  • warning-threshold: defines the warning level of the property that fires a new notification
  • relation: defines the comparison type; possible values are GREATER_THAN, SMALLER_THAN

 

Here you can find some examples of alert configurations:

 

<alert property="DURATION" critical-threshold="10000" warning-threshold="0" relation="GREATER_THAN"/>

<alert property="MEMORY" critical-threshold="1024" warning-threshold="0" relation="GREATER_THAN"/>

<alert property="EXCEPTIONS" critical-threshold="1" warning-threshold="0" relation="GREATER_THAN"/>

 

 

Configuring components to be instrumented

 

The most recent version of SpyGlass Tracer lets you extend the list of classes and components to be instrumented.

Components to be instrumented are grouped into layers, groups and items to simplify aggregated visualizations in the plugin. A possible approach to organizing metrics could be the following: a layer can be a type of component (for example “servlet”), a group can be a class (for example “ActionServlet”) and an item can be a single instrumented method of that class (for example “doGet”).

Standard J2EE components are grouped into separate layers (SessionBean, Servlet, JSP and so on).

 

The basic information you have to set up for instrumenting classes is:

 

  • name: logic name of the layer
  • tabpos: tab position in the layer panel in VisualVM plugin
  • classes name: class selector
  • method name: method selector
  • group: logic name of the group.
  • item: logic name of the item

 

Below you can see an example of the layer section:

 

<layers>
  <layer name="Specific" tabpos="100">
    <classes name="com.spyglasstools.tracertest.instrumentation.SpecificClass" group="{FULL_CLASS_NAME}">
      <methods regexp="(.)*" item="{METHOD_NAME}"/>
    </classes>
  </layer>
  <layer name="SuperClass" tabpos="102">
    <classes extends="com.spyglasstools.tracertest.instrumentation.SuperClass" group="{FULL_CLASS_NAME}">
      <methods regexp="(.)*" item="{METHOD_NAME}"/>
    </classes>
  </layer>
  <layer name="Interface" tabpos="104">
    <classes implements="com.spyglasstools.tracertest.instrumentation.Interface" group="{FULL_CLASS_NAME}">
      <methods regexp="(.)*" item="{METHOD_NAME}"/>
    </classes>
  </layer>
  <layer name="Annotation" tabpos="106">
    <classes annotation="com.spyglasstools.tracertest.instrumentation.Annotation" group="{FULL_CLASS_NAME}">
      <methods regexp="(.)*" item="{METHOD_NAME}"/>
    </classes>
  </layer>
  <layer name="RegExp" tabpos="108">
    <classes regexp="com\.spyglasstools\.tracertest\.instrumentation\.RegExp.*" group="{FULL_CLASS_NAME}">
      <methods regexp="(.)*" item="{METHOD_NAME}"/>
    </classes>
  </layer>
</layers>

 

In this example we define:

 

  • a tab called “Specific” to show execution information for all the methods of the class com.spyglasstools.tracertest.instrumentation.SpecificClass
  • a tab called “SuperClass” to show execution information for all the methods of all the classes extending com.spyglasstools.tracertest.instrumentation.SuperClass
  • a tab called “Interface” to show execution information for all the methods of all the classes implementing com.spyglasstools.tracertest.instrumentation.Interface
  • a tab called “Annotation” to show execution information for all the methods of all the classes annotated with com.spyglasstools.tracertest.instrumentation.Annotation
  • a tab called “RegExp” to show execution information for all the methods of all the classes whose name matches the regexp regular expression

 

It is possible to use multiple class selectors at the same time to narrow the selected classes for a specific layer, as in the sketch below.
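
For example, a layer could combine the implements selector with a regexp selector so that only the implementing classes of a given package are instrumented (a sketch assuming the selector attributes can be combined on the same classes node; the class names are hypothetical):

<layer name="CompanyDao" tabpos="110">
  <classes implements="com.company.dao.Dao" regexp="com\.company\.dao\..*" group="{FULL_CLASS_NAME}">
    <methods regexp="(.)*" item="{METHOD_NAME}"/>
  </classes>
</layer>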

 

The class selector

 

The class selector identifies the classes that have to be instrumented.

A layer can group many different classes, so it is possible to use many classes nodes for each layer node.

 

Each class can be identified by:

 

  • name: its own name
  • extends: the name of an ancestor class
  • implements: the name of an implemented interface
  • annotation: the name of a used class annotation
  • regexp: a regular expression matching its class name[1]

 

A logic name can be defined using variables defined in the agent memory:

 

  • {FULL_CLASS_NAME}: complete name of the class included in its own package
  • {CLASS_NAME}: name of the class without its own package

 

The method selector

 

Using the “methods” node you can define which methods are to be instrumented.

 

Each method can be identified using:

 

  • name: using its own name
  • regexp: using a regular expression[2]

 

Some Examples

 

Instruments all methods of a class:

 

<layer name="Specific" tabpos="100">

<classes name="com.spyglasstools.InstrumentedClass"

group="{FULL_CLASS_NAME}">

<methods regexp="(.)*" item="{METHOD_NAME}"/>

</classes>

</layer>

 

Instruments the “int add(int i, int j)” method of a class (the method is identified using the JVM internal descriptor syntax, where “add(II)I” denotes a method add taking two int parameters and returning an int):

 

<layer name="test1" tabpos="1001">

<classes name="com.spyglasstools.InstrumentedClass"

group="{FULL_CLASS_NAME}">

<methods name="add(II)I" item="{METHOD_NAME}"/>

</classes>

</layer>

 

Instruments all add and sub methods in a class:

 

<layer name="test1" tabpos="1001">

<classes name="com.spyglasstools.InstrumentedClass"

group="{FULL_CLASS_NAME}">

<methods regexp="add()(.)*" item="{METHOD_NAME}"/>

<methods regexp="sub()(.)*" item="{METHOD_NAME}"/>

</classes>

</layer>

 

Instruments all methods of a class that implements an interface:

 

<layer name="ejb2.1" tabpos="1002">

<classes implements="javax.ejb.SessionBean"

group="{FULL_CLASS_NAME}">

<methods regexp="(.)*" item="{METHOD_NAME}"/>

</classes>

</layer>

 

Instruments all methods of an annotated class:

 

<layer name="ejb3" tabpos="1003">

<classes annotation="javax.ejb.Stateless" group="{FULL_CLASS_NAME}">

<methods regexp="(.)*" item="{METHOD_NAME}"/>

</classes>

</layer>

 

Instruments all methods of all classes with name ending with “DAO”:

 

<layer name="test2" tabpos="1004">

<classes regexp="(DAO)$" group="{FULL_CLASS_NAME}">

<methods regexp="(.)*" item="{METHOD_NAME}"/>

</classes>

</layer>

 

Instruments all classes in the package “com.company.toinstrument” and its sub packages:

 

<layer name="RegExp" tabpos="108">

<classes regexp="com\.company\.toinstrument.*" group="{FULL_CLASS_NAME}">

<methods regexp="(.)*" item="{METHOD_NAME}"/>

</classes>

</layer>

 

 

 

 

On-The-Fly instrumentation

 

SpyGlass Tracer also supports on-the-fly instrumentation (also known as “hot instrumentation”).

This feature, available since Java 1.6, lets you inject the agent jar after JVM start-up, without using the “-javaagent” option.

The biggest advantage of this approach is that you can collect metrics even for applications that, when launched, were not configured to be instrumented.

On-the-fly instrumentation requires the following steps:

 

  1. Identification of the PID (Process ID) of JVM to be instrumented
  2. Injection of the agent
  3. Reload and instrumentation of the classes
  4. Collection of the applicative metrics

 

It is important to keep in mind that while steps 2, 3 and 4 are executing, the application is not fully responsive, so real users could experience significant issues using the application.

When step 4 is completed, the application will be as efficient as before, but with the agent collecting and publishing metrics.

 

How to identify the JVM PID

 

Different Operating Systems have different methods to retrieve the PID of a JVM.

The Java SDK, however, offers the “jps” command to retrieve the list of all the JVMs currently executing on the machine.
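
For example (illustrative output; the first column is the PID, which is what we need for the injection step):

<path-to-jdk>/bin/jps -l
3652 org.jboss.Main
4820 sun.tools.jps.Jps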

 

 

Agent injection

 

Agent injection is executed using the following command:

 

<path-to-jdk>/bin/java -classpath SpyGlassAgent.jar;"<path-to-jdk>\lib\tools.jar" by.concept.spyglasstracer.OnTheFly <pid>

(note: the example uses the Windows classpath separator “;”; on Linux or UNIX use “:” instead)

 

The command will terminate soon after submitting the jar to the application JVM, which will show the following message in its log:

 

...starting on path <path-to-agent-jar>/SpyGlassAgent.jar

 

 

When class instrumentation is completed, the agent will start to publish metrics that can be retrieved using the SpyGlass Tracer plugin.

 

Requirements

 

The SpyGlass Tracer Agent has the following requirements:

 

  • for dynamic instrumentation, the application must be executed on a JVM 1.5 (JRE or JDK) or later
  • for on-the-fly instrumentation, the application must be executed on a JDK 1.6 or more recent
  • certified JVMs tested with the SpyGlass Tracer Agent are:
    • SUN/Oracle 1.5, 1.6, 1.7
    • OpenJDK 1.5, 1.6
    • JRockit R28[3]
  • certified Application Servers are:
    • Tomcat 6.x, 7.x
    • JBoss 5.x, JBoss 7.x

 

On-the-fly instrumentation can be applied only to applications running on a JDK 1.6 or later; a JRE is not sufficient. To avoid any kind of issue, we suggest using for the agent injection the same JDK version used for the application start-up.

 

 

 

SpyGlass Tracer Plugin

 

The SpyGlass Tracer Plugin is used for real-time analysis of all the metrics collected and published by the SpyGlass Tracer Agent. This plugin must be installed in VisualVM.

 

The SpyGlass Tracer Plugin (hereafter also “the plugin”) integrates different dashboards to display all the available metrics. The connection between the agent and the plugin is created using an IP socket defined in the agent configuration file and specified in the plugin setup.

 

 

The executable code of the plugin is contained in the SpyGlassPlugin.nbm file, which is part of the standard distribution package in all versions of SpyGlass Tracer.

 

To avoid any compatibility issue between the Plugin and the Agent, we suggest always using the same release version for both.

 

You can check the Agent and Plugin versions in the upper right side of the plugin screen.

VisualVM

 

The SpyGlass Tracer plugin can be installed on any computer with a SUN/Oracle JDK release 1.6.0_24 or later[4].

 

In the “bin” directory of the JDK you can find the file to launch VisualVM: jvisualvm.exe (or jvisualvm on Linux machines).

After start-up, VisualVM will display on the left side a tree with all the running JVMs (note that on Linux machines the list is limited by the current user's permissions):

 

 

By double-clicking one of the local nodes, you can start monitoring it using the installed plug-ins:

 

 

A progress bar appears in the bottom-left corner, showing the connection in progress.

As soon as it is connected, VisualVM will show a panel with all of the basic configuration parameters of the JVM:

 

For example in the above view we can see (in the labels on the top of the internal window):

 

  • 4 standard plug-ins (Overview, Monitor, Threads, Sampler)
  • 1 VisualVM plugin (MBeans)
  • 3 SpyGlass Tools plug-ins (JMX Dump, Smart Thread Dump, GC Analyzer[5])

 

Downloading VisualVM

 

VisualVM is freely distributed together with the SUN/Oracle JVM release 1.6 or later, and it is the platform suggested by Oracle for developing monitoring tools.

It is strongly suggested to use the SUN/Oracle JVM to execute VisualVM and SpyGlass Tracer plugin.

If you need to download VisualVM you can go to the VisualVM website (http://visualvm.java.net/) and download the VisualVM 1.3.6 installation package.

A strict requirement is that you must run it on a JVM 1.6 or later.

 

Plug-in installation

 

To install the plugin you must select the menu Tools/Plugins:

 

 

Then select the “Downloaded” tab and click on the “Add Plugins…” button.

In the browse window, select the SpyGlassPlugin.nbm file from the directory where you saved it.

 

 

Finally, click on the “Install” button to execute the installation.

 

 

NB: On the first installation you may receive a message about an untrusted certificate. This is related to Certification Authority updates in your JDK. Skip this message and continue.

 

 

 

Overview

 

The first step in application monitoring is to understand whether the monitored system is active or not.

In practical terms, this means investigating in real time whether your system is working and responding to incoming requests, what its performance is, and how many requests it is managing.

The working status of a system can be described using four basic parameters:

 

  • Number of requests executed in a time interval (throughput)
  • Average elaboration time (execution time)
  • Number of connected users (active users)
  • Number of threads used to complete actual requests (threads)

 

Soon after its start-up, the SpyGlass Tracer plugin will display this basic information in real time, so at any moment you can assess in a few seconds the effective status of the monitored system:

 

 

 

 

Throughput

 

The throughput dashboard can be used to evaluate the ability of your system to execute and complete incoming requests in a defined time interval.

By default the interval used by SpyGlass Tracer is 5 seconds, but it can be easily customized.

 

Incoming requests can be generated by different sources, such as web users, JMS processes, scheduled services, web services or direct component calls to the business platform. Further kinds of requests can be defined via the SpyGlass Tracer configuration files.

Usually throughput is not constant, as we can easily identify different loads in different moments of the life of the monitored system. You will often see very unpredictable values in short intervals, but more regular ones over long periods (average behaviour).

Throughput could also display the number of user transactions managed in the last sample period.

 

 

In the example above we can see, beyond the single spikes, a growing trend of the throughput following the increase of users working with the system (sending requests). The spikes are not meaningful by themselves but, following the trend, you can expect a degradation of performance, probably due to issues related to resource usage (CPU, Memory, Threads, …).

 

The relevant information related to throughput is that, when it is greater than zero, we can be sure the system is processing and completing requests. However, we do not know whether this is done with errors or not.

 

Note that requests not completed within the sample period are not considered in the throughput (see Pending Requests), but they are considered in the threads chart.

 

Execution Time

 

The execution time dashboard reports the actual average execution time of the requests completed in the sampling interval. There are three interesting values to be monitored:

 

  • Average execution time (central dark line)
  • Lowest execution time (lower vertical limit of the area)
  • Highest execution time (upper vertical limit of the area)

 

 

While watching this dashboard, we should focus mostly on the average execution time (the line) and on the upper limit of the area, which represents the highest execution time. It often happens that the average time is very low, but there are a few requests with very high execution times that need investigation in order to tune the system.

 

When the lower/upper limits and the average are identical, it means that all of the requests have been executed in the same time or, more probably, that only one single request has been executed.

 

Note that requests not completed within the sample period are not considered in the execution time (see Pending Requests).

 

Active Users

 

The active users dashboard displays the number of distinct Session IDs with some activity tracked in the recent period (usually 2 minutes).

This metric is more useful than the commonly used “number of active sessions” metric, because users with an active session who are not doing anything are normally not degrading the performance of the system.

This metric can easily be related to the throughput, and it can help to understand the effective impact of users’ activity on the system.

By default the sample period for active users is 2 minutes, but it can be customized.

Normally this metric is considered very relevant for any web application.

 

 

In the above example you can see a typical ramp of users connecting to the system.

 

Note that this metric considers only calls received by the Web Container.

 

Threads

 

This chart displays the number of threads used in the sample interval, and so it can help to understand the effective usage of the physical resources of the server where the monitored application is executed.

Normally, if the number of used threads is much lower than the throughput, we can say that the application is able to complete all incoming requests very quickly, reusing the same existing resources without requiring the allocation of new ones. This is a confirmation that the system is behaving optimally.

 

 

In the above example we can see that at most 4 threads are used; if we combine this information with the number of active users (9 concurrent users) and the throughput (12 completed requests per interval), in a few seconds we can get a rough idea of how the application is behaving. We will also see that with an increasing number of users the number of threads will obviously increase.

 

Note that this metric considers all applicative threads, including JMS, scheduled activities and direct calls to back-end components.

 

Applicative transactions

 

The first step to analyze a transactional system, like a J2EE application, is to identify applicative transactions.

An applicative transaction represents all the operations required to complete a call, posted by a user or by another system, that is able to change the status of the system. We can easily identify it because it logically moves the system between two coherent statuses, on the basis of the called (transformation) function and the received data (parameters).

From the user's perspective, he/she cannot tell whether the transaction happened on one or more servers: he/she only knows that, after its completion, the application status is updated.

 

Among all applicative transactions, business transactions are the most relevant, because the user can see a business meaning in the executed operations. That is why it can be very useful to monitor the behaviour of your system following business transactions, such as delivering a file or printing an invoice.

Keep in mind that, obviously, not all business transactions have the same relevance, and that a business transaction may require several applicative transactions to be completed (consider, for example, a wizard to fill in a form and then print its data).

 

 

 

 

What is a trace?

 

A trace is a group of technical information associated with an end-to-end transaction, i.e. from the first step of its execution (the user request) to the last one (the delivery of the response), after which we can consider the transaction completely executed. Clearly a trace can cross all application layers, for example from the web container through all business layers.

Monitoring the efficiency of an application system means, primarily, monitoring its trace performance.

So we can consider a trace as the basic/minimal monitoring item able to identify completed requests and their behaviour in terms of performance and/or resource usage.

 

Each incoming request can be seen as a trace, but often some traces have no real applicative meaning: they are micro-components, or accessories to other applicative functions[6]. For example, think of the update of a combo-box in a common AJAX interface after a higher-level item is selected (Country -> States).

 

 

The above image displays the list of traces collected by the agent and their related information: after selecting a specific trace in the upper panel, you can see its details in the lower one.

 

Information related to a trace

 

A trace can be described using different information items:

 

  • id of the trace
  • timestamp of the execution
  • layer of the trace
  • url: a generic description of the request type
  • client ip: ip address of the request origin
  • thread: the reference to the thread used for completing the trace
  • user id: user id of the requester (only for web requests and usually the session id)
  • db: the number of SQL statements executed during the trace
  • exceptions: number of raised exceptions
  • duration: in milliseconds
  • memory: estimated used memory in KB[7]
  • favourites: an icon shows whether this trace has been copied to the favourite trace list (see the running panel)

 

 

Selecting a trace will show many detail tabs for deeper analysis:

 

  • timeline: graphical synthesis of information related to thread state evolution
  • overview: a summary of main calls sorted by total duration
  • tree: the call tree of the request, which can be explored to find where specific calls are executed
  • exceptions: the exceptions raised during execution of the request
  • sql: the list of all executed SQL statements
  • request: the list of all http-request parameters with related values (available only for web requests)
  • thread: metrics related to the execution thread (CPU, user and system time, waiting time and blocked time)
  • lock: any locks related to the execution thread, to monitor lock contention and thread state evolution

 

Traces that trigger alert rules are highlighted in red among the others, to make them easier to identify.

 

 

If for some reason you need to spend more time investigating traces, you can follow two different approaches:

 

  • select the trace as a favourite, so it will be permanently copied to the favourites tab list
  • break the connection with the agent using the button in the top right area of the plugin, stopping the plugin from receiving new traces:

 

 

When the analysis is done, you can reconnect the plugin to the agent using the same button.

 

 

Timeline

 

You can find the timeline just above the list of traces: it is a coloured horizontal line that represents the runtime state during specific sampling intervals. The timeline displays the evolution of the trace status along the duration of the transaction.

 

 

Each colour is related to a status:

 

  • green: RUNNABLE
  • yellow: WAITING
  • orange: TIMED_WAITING
  • red: BLOCKED

 

NB: the timeline can be seen as a simplified representation of the information available in detail in the “thread” and “lock” panels described below.

 

Overview

 

The Overview panel displays all of the instrumented calls executed during the selected trace, sorted by decreasing inherent time. The “inherent time” is the execution time spent inside the component itself, including not-instrumented calls but excluding instrumented sub-calls.
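
To make the definition concrete, consider this hypothetical fragment (the method names and timings are invented for illustration), where handleRequest(), parseInput() and saveResult() are instrumented while compute() is not:

public void handleRequest() {   // total duration: 100 ms
    parseInput();               // instrumented sub-call: 30 ms
    compute();                  // not instrumented: its time stays in handleRequest
    saveResult();               // instrumented sub-call: 50 ms
}

// inherent time of handleRequest = 100 - 30 - 50 = 20 ms,
// and those 20 ms include the not-instrumented compute() call.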

To make the list easier to read, the most time-consuming components are listed first.

If the same component is traced many times, all executions are grouped together, and the value of the “calls” column tells you how many executions are considered:

 

 

Available information is:

 

  • method: is the name of the method or the SQL statement
  • inh. time (inherent time): is the execution time spent inside the component itself, including not-instrumented calls but excluding instrumented sub-calls
  • min, max and avg: statistical information about the executions of the method (meaningful when the same method is executed several times, i.e. calls is greater than one)
  • calls: is the number of executions of the same method that are grouped
  • %: is the relative quota of the total execution time of the trace spent in that method

 

 

Tree

 

The “tree” panel displays the calls related to the selected trace, in their chronological sequence and hierarchy, which you can navigate with a drill-down approach.

 

 

Available information is:

 

  • method: is the name of the method or the SQL statement
  • inh. time (inherent time): is the execution time spent inside the component itself, including not-instrumented calls but excluding instrumented sub-calls
  • duration: is the total completion time of the method including all sub-methods
  • %: is the relative quota of the total execution time of the trace. Here the information is displayed graphically: grey is the total execution time of the transaction (100%), orange is the relative duration of the method, red is the relative inherent time of the method
  • calls: is the number of executions of the same method call
  • offset: is the time in milliseconds of the start of the method from the start of the trace

 

A few comments about the percentage bar: methods with larger red bars are normally the most relevant ones in the selected trace, while the orange bars help you find the longest methods with hidden sub-methods.

 

 

Exceptions

 

The “exception” panel shows exceptions raised while executing the selected trace:

 

 

Available information is:

 

  • timestamp: when the exception is raised
  • url: a generic description of the request type
  • exception: name of the exception
  • On the right side you can see the stacktrace for the selected exception

 

It is worth highlighting that some of the raised exceptions may be trapped by the application code and correctly managed, so they may not be visible anywhere (log, user, …)[8] and may not represent an applicative problem, but only a bad code implementation.
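
For example, a hypothetical fragment like the following raises an exception that never reaches any log, yet the agent can still count it, because the raise happens inside instrumented code:

private Integer parsePort(String value) {
    try {
        return Integer.valueOf(value);
    } catch (NumberFormatException e) {
        return null; // silently swallowed: invisible in the logs, visible to the agent
    }
}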

 

Sql

 

The “sql” panel shows all information related to database statements:

 

 

Available information is:

 

  • timestamp: when the statement is executed
  • event: event type
  • connection ID: id of the used JDBC connection
  • description: executed SQL statement
  • duration: statement duration

 

Request

 

The “request” panel shows information about request parameters received when the trace starts:

 

 

Available information is:

 

  • name: parameter name
  • values: parameter value
  • size: estimated parameter size in bytes

 

Use these values when you want to investigate suspicious behaviours.

 

Thread

 

The “Thread” panel shows the statistical information related to the execution times of the selected trace.

 

 

Available information (as values and as percentages relative to the trace duration) is:

 

  • Duration: total completion time of the transaction
  • CPU: CPU time used to complete the request
  • CPU user: not-privileged CPU time (application or user)
  • CPU system: privileged CPU time (system)
  • Blocked: time, while executing the trace, when the thread is in BLOCKED status
  • Waiting: time, while executing the trace, when the thread is in WAITING or TIMED_WAITING status
  • DB Time: time needed to complete all SQL statements

 

For long transactions, this information is useful to understand where (i.e. “doing what”) time is consumed. In particular, you can understand whether a transaction is slow because of access to shared resources or because of bad synchronization locks.

 

 

Lock

 

The “lock” panel shows the reasons why the thread status is not RUNNABLE. While sampling the thread, a new line is added whenever the status is BLOCKED, WAITING or TIMED_WAITING, with the related information (lock, lock owner and duration).

 

 

Locks are recorded following a dedicated sample period, defined in the configuration file (usually 200 milliseconds).

 

Available information is:

 

  • offset: is the time in milliseconds of the lock from the start of the trace
  • status: is the status of the thread as recorded during the sample period
  • lock name: is the name of the blocking lock
  • lock owner: is the name of the thread that is possibly blocking the execution of the selected transaction
  • blocked time: time spent in BLOCKED status during the sample period
  • waiting time: time spent in WAITING or TIMED_WAITING status
  • On the right side you can see the description of the selected lock

 

The collected information simplifies the analysis and solution of problems related to thread synchronization.

The trace panel

 

SpyGlass Tracer can automatically identify and trace different transaction types, identifying their context (layer). Currently the recognized trace types are:

 

  • Servlet and JSP in Web Container
  • Asynchronous transaction on JMS queues: Message Driven Bean
  • Scheduled transactions: Quartz job
  • Calls to backend components: Session Bean and Entity Bean
  • RMI calls

 

However, other transaction types can be managed by defining them through parameters in the configuration file.

 

SpyGlass Tracer has four different panels where traces are displayed:

 

 

  • Running: displays in real time the traces published by the Agent. The list is updated using a FIFO logic.
  • Pending: displays in real time the traces not yet completed at the end of the sample period. This panel is used to identify slow traces before they are completed.
  • Critical[9]: displays the traces that trigger at least one alert. Obviously these traces are the best candidates for an optimization and tuning analysis.
  • Favourites: displays the traces that have been marked by the user in the “running” panel for further analysis.

 

 

Running

 

The “running” panel shows the most recent traces received from the monitored application.

By default the list is configured to show at most 200[10] traces, listed following a FIFO (First In First Out) policy.

 

 

Available information is:

 

  • id: identifier of the trace
  • timestamp: time of the execution (completion) of the trace
  • layer: the layer associated with the trace
  • url: a generic description of the request type
  • client ip: address of the requester
  • thread: the thread used for the execution
  • user id: reference to the user who sent the request (available only for web applications and usually a session id)
  • db: number of executed SQL statements
  • exceptions: number of raised exceptions
  • duration: duration of the trace in milliseconds
  • memory: estimated used memory in KB[11]
  • favourites: select the star icon if you want to copy this trace to the favourites tab list

 

In the lower part, this view can show all of the detail panels described above.

 

Pending

 

The “pending” panel shows all the traces that are not yet completed at the end of the sample period. This view is useful to identify slow transactions and start an analysis to understand the reason for the delay:

 

 

Available information is:

 

  • started at: is the timestamp when the request has been received (start of the trace)
  • layer: the layer associated with the trace
  • url: a generic description of the request type
  • client ip: address of the requester
  • thread: the thread used for the execution
  • user id: reference to the user who sent the request (available only for web applications and usually a session id)
  • pending time: elapsed execution time of the trace so far

 

In the lower part, none of the detail panels described above are available.

 

Critical

 

The “critical” panel is automatically filled with all the traces that, for some reason, trigger at least one alert defined in the configuration file[12].

 

 

The information here is the same as in the “running” panel.

 

Favourites

 

The “favourites” panel shows all the traces previously selected during the real time analysis for further investigation.

Use the “star” icon in the “running” panel to add traces to this list.

 

 

The information available here is the same as in the “running” panel.

 

The trace toolbar

 

In the trace panel a toolbar is available to activate some useful commands.

 

 

Available commands are:

 

  • clean: removes all the information captured and available in all of the panels, and starts the collection from scratch. It is exactly like starting a monitoring session from the beginning.
  • copy trace to clipboard: copies to the system clipboard an XML text with all of the information of the selected trace. Useful to export trace information elsewhere for analysis.
  • export session trace table: saves all of the information of the “running” panel to an external CSV file. Useful to have this information elsewhere for analysis.
  • export trace: exports in HTML format all of the information of the selected trace. Useful to have this information elsewhere for analysis.

 

Profiler

 

The Profiler section of the SpyGlass Tracer plugin displays different trace statistics collected while monitoring the application. In particular, the following panels are available:

 

  • url: execution calls aggregated at url level
  • hotspots: execution calls aggregated at instrumented method level
  • memory: memory usage aggregated at url level
  • cpu: CPU usage aggregated at url level
  • waiting: thread waiting status aggregated at url level
  • blocked: thread locked status aggregated at url level
  • sql: execution times of SQL statements aggregated at statement level
  • connections: usage of JDBC connections
  • exceptions: raised exceptions

 

The objective of this section is to offer, in an easy and quick view, information useful to identify areas where some actions could improve the performance of the system:

 

  • slow calls
  • critical method calls
  • memory consuming calls
  • cpu intensive calls
  • waiting and blocking impacts on calls performance
  • critical sql statements
  • JDBC connection usage
  • occurring exceptions

 

 

Url

 

The “url” panel lets you monitor the relative weight of each url captured during monitoring:

 

 

This information can help to optimize the monitored application by showing the most expensive and/or the most used transactions.

Available information is:

 

  • url: generic name of the call
  • duration: sum of durations of captured calls in the group
  • min, max, avg: some statistical information about durations of the group of calls
  • calls: number of calls in the group
  • %: percentage of total execution time of calls in the group

 

Using this table you can easily identify the slowest or most common user requests. Obviously these are the best candidates when optimizing your system through application tuning.

 

Hotspots

 

The “hotspot” panel lets you identify the instrumented methods that contribute most to the response time of your system.

 

 

This is another way to understand how to optimize the performance of your system.

Available information is:

 

  • method: generic name of the method
  • inh. time: sum of the inherent times of the instrumented methods in the group
  • min, max, avg: basic statistical information about inherent time of the group of methods
  • calls: number of instrumented methods in the group
  • %: percentage of total execution time of the instrumented methods in the group

 

This table displays the most critical methods in the effective usage of the application: improving the response time of the most relevant methods (by number of calls or by time) will improve the overall efficiency of the system.

 

Memory

 

The “memory” panel lets you identify the calls that consume the most memory.

 

 

Available information is:

 

  • url: generic name of the call
  • memory: sum of the memory blocks allocated by the calls in the group
  • min, max, avg: basic statistical information about the memory allocated by the group of calls
  • calls: number of calls in the group
  • %: percentage of total allocated memory of calls in the group

 

This table is fundamental to identify the calls that need optimization of object allocation and memory consumption: improving memory usage improves the general behaviour of the JVM, reducing the number of garbage collections and the related garbage collection time, with an obvious improvement in the response time of all calls.

By analyzing memory usage it is often possible to identify poor-performance algorithms or sub-optimal code solutions that normally are not visible and not easy to find.
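
As a hypothetical example of the kind of allocation-heavy code this panel can reveal, compare building a string by concatenation in a loop with a StringBuilder-based rewrite:

// Allocation-heavy version: every iteration creates a new String,
// so the calls executing it would stand out in the "memory" panel.
static String buildReportSlow(java.util.List<String> rows) {
    String report = "";
    for (String row : rows) {
        report = report + row + "\n";
    }
    return report;
}

// Cheaper rewrite: one internal buffer is reused across iterations.
static String buildReportFast(java.util.List<String> rows) {
    StringBuilder sb = new StringBuilder();
    for (String row : rows) {
        sb.append(row).append('\n');
    }
    return sb.toString();
}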

 

Cpu

 

The “cpu” panel lets you identify the calls that consume the most CPU resources.

 

 

Available information is:

 

  • url: generic name of the call
  • cpu: sum of single CPU usage times of the calls in the group
  • min, max, avg: basic statistical information about CPU usage times of the group of calls
  • calls: number of calls in the group
  • %: percentage of total CPU usage time of calls in the group

 

This table is fundamental to identify the calls that use the most CPU time.

CPU time is a limited resource, and it is really important to optimize its usage to avoid waste or spikes related to inefficient algorithms.

 

Waiting

 

The “waiting” panel lets you identify the “slower” calls influenced by threads passing through the WAITING or TIMED_WAITING status.

 

 

Available information is:

 

  • url: generic name of the call
  • time: sum of the time-frames during which the call execution threads were in WAITING or TIMED_WAITING status
  • min, max, avg: basic statistical information about the time the call execution threads spent in WAITING or TIMED_WAITING status
  • calls: number of calls
  • %: percentage of the total time the threads of the group of calls spent in WAITING or TIMED_WAITING status

 

This table is useful because, with a single view, you can easily identify all the functionalities that spend significant time waiting for something.

 

Blocked

 

The “blocked” panel lets you identify the calls that have been in BLOCKED status.

 

 

Available information is:

 

  • url: generic name of the call
  • time: sum of the time-frames during which the threads of the calls of the group have been BLOCKED by synchronization locks
  • min, max, avg: basic statistical information about the time the threads of the group of calls have been BLOCKED
  • calls: number of calls in the group
  • %: percentage of the total time the threads of the group of calls have been BLOCKED

 

This table is useful because, with a single view, you can easily identify all the features that perform slowly because they compete with other threads for the same resources (locks), entering BLOCKED status while waiting for the resource to become free.

To fix this kind of concurrency issue you need a more efficient synchronization policy. This is a basic requirement for system scalability.
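
A hypothetical contention point could look like the following: every request thread serializes on a single global lock, so under load the callers appear in this panel as BLOCKED:

public class ReportRenderer {

    private static final Object LOCK = new Object();

    public String render(String data) {
        synchronized (LOCK) {            // one global lock: concurrent callers go BLOCKED here
            return expensiveRender(data);
        }
    }

    private String expensiveRender(String data) {
        // slow formatting work executed while the lock is held
        return data.trim();
    }
}

Reducing the scope of the lock, or giving each thread its own instance of the non-thread-safe resource, are typical examples of the more efficient synchronization policies mentioned above.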

 

SQL

 

The “sql” panel lets you see the effective impact of each SQL statement on the total time spent managing queries on your database.

 

 

Available information is:

 

  • query: actual SQL statement
  • duration: sum of all execution time spent by different calls using this statement
  • min, max, avg: basic statistical information about times spent executing the statement
  • calls: number of calls using this statement

 

Using this table it is very easy to find the slowest and most executed queries. This is the ideal starting point for optimizing the SQL area of your application.
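
A classic pattern this table exposes is the “N+1” query: a hypothetical fragment like the following executes the same statement once per item, so that statement shows up with a very high “calls” count:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

static void printCustomerNames(Connection conn, List<Long> customerIds) throws SQLException {
    PreparedStatement ps = conn.prepareStatement("SELECT name FROM customer WHERE id = ?");
    try {
        for (Long id : customerIds) {
            ps.setLong(1, id);                 // one database round trip per id
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
            rs.close();
        }
    } finally {
        ps.close();
    }
}

Rewriting such a loop as a single query (for example with an IN clause or a join) reduces both the “calls” count and the total duration.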

 

Connections

 

The “connections” panel lets you see in real time each active connection and the code section where it was last used.

This information is useful to identify possible connection leaks.

 

 

Available information is:

 

  • timestamp: when the connection was last used
  • trace ID: reference to the last trace using the connection
  • connection ID: reference to the connection
  • status: connection status
  • class: the Java class implementing java.sql.Connection used by the selected connection
  • details: JDBC driver url (if available)
  • stacktrace: stacktrace of the last statement executed on the selected connection

 

In-use connections are highlighted in yellow.

This table lets you monitor the JDBC connections used by the application. You can also track possibly unclosed or leaked connections and get a clear indication of the source code position where the connection was last used.
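
A typical leak occurs when the code obtains a connection and closes it only on the success path: if the statement throws, the connection is never returned and stays “in use”. A hypothetical leak-free version, releasing everything in finally blocks:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

static void markInvoicesAsSent(DataSource ds) throws SQLException {
    Connection conn = ds.getConnection();
    try {
        Statement st = conn.createStatement();
        try {
            st.executeUpdate("UPDATE invoice SET state = 'SENT'");
        } finally {
            st.close();
        }
    } finally {
        conn.close(); // always released, even when the update fails
    }
}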

 

 

Exceptions

 

The “exception” panel lets you see the exceptions tracked by the agent.

 

 

Available information is:

 

  • timestamp: when the exception has been raised
  • url: execution section where the exception has been tracked
  • exception: exception class

On the right side you can also see the effective stack trace output:

 

This panel is useful because it informs you when the application code raises exceptions, without opening the log file: everything is collected in real time and displayed in a clear way. This allows you to focus on the content of the exception and not on the way to retrieve it.

The agent can also track exceptions that are not available in the application log, because they have been trapped or managed by the code.

 

JVM

 

The JVM section of the SpyGlass Tracer plugin lets you see the main metrics related to the JVM behaviour:

 

  • Memory spaces
  • Loaded classes
  • Allocated objects
  • Promoted objects
  • CPU usage
  • Garbage Collectors activity (GC)

 

 

This section has its own sample time, which you can define in the configuration file.

By default this value is one second.

 

Behaviour of the memory

 

One of the most relevant characteristics of Java is its completely automated memory management. This means it is not easy for the developer to optimize memory usage, because this behaviour is hidden by the JVM. But an optimized behaviour of the memory is one of the ways to improve performance and scalability.

Real time analysis of the memory is one of the key approaches to solve these problems: SpyGlass Tracer can track the most relevant metrics to help you understand whether memory could be a bottleneck for the application performance.

 

The first “space” to explore is called “Eden”, and it is the area of memory where newly created objects are allocated:

 

 

Normally the Eden is about 1/3 of the total allocated memory.

When the Eden space is full, a “small” garbage collection is executed to remove unused objects and copy the live ones into the “Survivor Space”. Usually this activity is very fast and not blocking.

So we could say that an efficient application should use many objects with a short life-cycle, so that after a “small GC” very few objects survive and the Eden can host many new ones.
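
If the dashboards show that the Eden is too small for the allocation rate of the application, the generation sizes can be tuned at JVM startup. As a purely illustrative example (the values and the application jar are invented; the options are standard for Sun/Oracle HotSpot JVMs of this generation):

java -Xms1024m -Xmx1024m -Xmn341m -XX:SurvivorRatio=8 -jar myapp.jar

Here -Xmn sizes the young generation (Eden plus the two Survivor spaces) at roughly 1/3 of the 1 GB heap, and -XX:SurvivorRatio=8 makes the Eden 8 times larger than each Survivor space.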

 

To get an idea of the object allocation rate in the Eden, the plugin has a dedicated “Allocation rate” graph that shows allocated objects in megabytes per second:

 

 

SpyGlass also has another graph that shows what is happening in the Survivor space:

 

 

If this space is empty after a “small GC”, it means that the garbage collector was able to remove most objects because they were unused, so you can expect no promotion to the Old Generation space.

 

The surviving objects remain in the Survivor Space for a certain number of small GCs; after that they are considered long-lived objects, and the JVM promotes them to the “Old Generation” (also known as “Tenured Gen”).

Sometimes the JVM tries to expand (or compact) the Survivor Space using part of the Eden to limit object promotion: this explains how the sizes of the memory spaces evolve during the life of the application.

 

SpyGlass can also display what is happening in the Old Generation (Old Gen) with a dedicated graph that shows promoted objects in megabytes per second:

 

 

Here each spike shows a promotion of some objects to the Old Generation.

 

In the Old Generation space you should find only persistent objects, which are usually also large objects.

 

 

When the Old Generation is nearly exhausted, a “full GC” is executed and both the Eden and the Old Generation are reorganized.

This is a heavy operation, because it deals with big objects and memory fragmentation: sometimes the JVM is so busy with the “full GC” that the application seems frozen for some seconds, depending mainly on the size of the heap memory and on the number of processors involved in the activity.
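
To correlate these pauses with what SpyGlass shows, you can also ask the JVM itself to log its collections. For example, on a Sun/Oracle JVM 1.6 (illustrative options, with an invented application jar):

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar myapp.jar

On the same JVMs, a concurrent collector such as -XX:+UseConcMarkSweepGC can shorten the stop-the-world phases of old generation collections, at the cost of some extra CPU usage.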

 

The Permanent Generation (Perm Gen) space is used to keep static classes and other static information. Normally, after the application is fully initialized, you can expect it to have a constant size, so it requires little attention. But sometimes, particularly if the application uses frameworks based on reflection, it can keep growing.

 

 

When the Permanent Generation is nearly exhausted, as for the Old Generation, a “full GC” is executed to reorganize this space too. During the full GC, dynamic classes are removed from this space. In these cases, it is important to set a correct size for the permanent generation space to limit full GCs.
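
On pre-Java-8 HotSpot JVMs the Permanent Generation is sized with dedicated options; an illustrative example:

java -XX:PermSize=128m -XX:MaxPermSize=256m -jar myapp.jar

A generous -XX:MaxPermSize limits the full GCs triggered by an exhausted Perm Gen; setting -XX:PermSize to the same value also avoids the resizing steps of this space.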

 

The Code Generation space is the area where all information related to compiled class byte-code is stored. Normally this is not a critical space, but if it fills up the JVM loses the ability to compile some Java calls to native code.

 

 

Finally, we should monitor how many classes are loaded into memory, using the Class Loading graph.

This information can be related to the Permanent Generation to understand the behaviour of the application.

 

 

Behaviour of the CPU

 

The other relevant parameter (besides memory) to be monitored in your system is CPU usage.

SpyGlass Tracer can capture in real time the actual CPU usage of the JVM process, drawing the effective usage (red line) and the weighted average usage (green line).

 

 

 

The second graph lets you map the CPU usage to the garbage collector activity, so you can understand whether the spikes are related to application activity or to JVM events:

 

 

In the above graph, an orange line is used for the “small GC” and a purple line for the “full GC”.

 

Layers

 

The “layer” section lets you see the behaviour of the different tracked components (standard and not)[13].

SpyGlass Tracer considers the following as standard components:

 

  • Servlet: all methods extending HttpServlet
  • JSP: all JSP pages
  • Struts 1.x Actions: all classes extending the Struts Action class
  • Business: Session Bean 2.x and 3.x
  • Entity: Entity Bean 2.x
  • JMS: Message Driven Bean
  • Scheduled: Quartz Job
  • Mail: email sender using javax.mail
  • Persistence: database activities

 

 

 

Using the configuration file you can specify new layers and new component groups to be monitored (Pro version or above only).

By default the sample interval is five seconds, but you can change it as needed in the configuration file.

 

Each panel has three areas:

 

  • a mask with a 3-level selector (group/item/presentation) to define the graph to be shown
  • a throughput graph with the number of completed calls
  • an execution time graph with the average execution times (in ms)

 

Use the selector to define which information you want to see, then choose the aggregation level you want to use for grouping the data.

 

As an example, let's look at the “business” layer, working with Session Beans.

The first level is the highest aggregation level (all methods of all Session Beans):

 

 

At this level of aggregation, the graphs display the total number of completed calls (throughput) in the sample interval and the average execution time.

 

Using the “group” selector you can define how to aggregate these data: the whole group or a restricted one, a single component, or a particular predefined aggregation highlighted by the brackets.

 

 

To investigate the spikes in the aggregated view, or the components most relevant for execution time, you can use the predefined aggregations, which in the “group” selector can be distinguished by the brackets “()”.

 

Top Peak 10 analysis

 

This predefined selector lets you display only the components that generate the highest spikes in the aggregated graphs (when the selection is “all”):

 

In the above graph we have highlighted the spikes in execution time, because we want to investigate who is responsible for them. To analyse them you can simply select the “top peak 10” option, so the graph will be updated with only the 10 most relevant components:

 

 

Now the graphs are no longer aggregated and, moving the mouse, they can show the values item by item, so it is easier to identify the single components and understand the contributors to each spike.

 

NB: the colours of the single lines may differ because they follow the order of the spikes' heights.

 

Top aggregated 10 analysis

 

This predefined selector lets you display only the components that contribute most over the whole investigation time-frame. In practice, the sum of all values of the different components is updated at every sample, and then the 10 highest are selected and displayed.

This kind of selection is very useful for all those analyses where fast components are executed many times, so that in aggregate they are relevant to the performance of the system.

 

 

The visualization style is similar to the “top peak 10”, but the components displayed in this aggregation are normally different.

 

Incremental Graph

 

The “top aggregated 10” view can be enhanced using the incremental presentation.

In fact, using this option you can see the whole impact of a specific component in the visualized time-frame, because every point of the line is the current value added to all previous values.
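
Conceptually, the incremental presentation is just a running total of the sampled values. A minimal Java sketch, assuming samples is a double[] holding the per-interval values of one component:

double[] incremental = new double[samples.length];
double total = 0;
for (int i = 0; i < samples.length; i++) {
    total += samples[i];       // each point is the current sample added to all previous ones
    incremental[i] = total;
}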

 

Using the “timeline” presentation we get these graphs.

 

 

How can we identify the component with the highest impact?

Switch the presentation to “incremental” and see how the graphs evolve.

 

 

As you can see, it is now easier to identify the most impacting calls (by invocation number or total execution time in the current time window).

 

 

This presentation type is useful when you have components with a low response time that are invoked so many times that a little optimization of their code can produce a high impact on the overall application performance.

 

 

Specific component analysis

 

You can also see the behaviour of a single component, if it was active in the selected timeframe, by selecting it in the “group” selector:

 

If one component is selected, the “item” selector also becomes active, allowing you to specify which method to show in the graphs:

 

 

As already seen at the component level, the predefined aggregated selectors are available here too, acting only on the item level: “top peak 10” and “top aggregated 10”.

 

 

Furthermore, you can obviously use the “timeline” or “incremental” presentation as above:

 

 

 

Plugin Requirements

 

The SpyGlass Tracer Plug-In has the following requirements:

 

  • SUN/Oracle JVM 1.6 or later with VisualVM
  • Any JVM 1.6 or later with VisualVM manually installed

 

 

 

VisualVM Configuration

 

Sometimes you may need to modify the VisualVM configuration by specifying optimized Java options.

VisualVM can be configured using the visualvm.conf file, available in the directory:

 

%JDK_HOME%\lib\visualvm\etc\visualvm.conf

 

If you need to manage a greater number of metrics, we suggest increasing the memory allocated to VisualVM. To do that, you have to add the new configuration parameters to default_options, using the “-J” prefix. For example:

 

default_options="-J-client -J-Xms24m -J-Xmx256m -J-Dsun.jvmstat…

 

Use the same approach any time you need to add Java options.

Recorder and Player

 

The SpyGlass Tracer agent also offers two particular features to store on the file-system all of the metrics collected during a specific monitoring time-frame and then visualize them offline, without the real application running:

 

  • Recorder: records all collected information and saves it in a compressed file
  • Player: reads previously recorded information and publishes it, so the plugin can show it (system simulation)

SpyGlass Tracer Recorder

 

SpyGlass Tracer lets you save all collected metrics in a compressed ZIP file that you can use to simulate (reproduce) the behaviour of the application during the monitored time-frame.

The recording feature works like the plugin, listening to the Agent port, but, instead of displaying the information in a user interface, it saves all of the collected metrics in a file.

 

The recorder is part of the agent jar and can be started with the following command:

 

 

java -cp SpyGlassAgent.jar by.concept.player.Main recorder <ip-address> <port> <recording-duration-in-seconds>

 

After starting, the recorder connects to the (already active) agent and saves all of the information in a temporary directory.

At the end of the recording, the information in the temporary directory is compressed into one single ZIP file.

This file is the only thing you need to start the simulation with the player.
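
For example, to record ten minutes (600 seconds) of activity from an agent listening on 192.168.1.10 (the address and port here are purely illustrative):

java -cp SpyGlassAgent.jar by.concept.player.Main recorder 192.168.1.10 46000 600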

 

 

SpyGlass Tracer Player

 

SpyGlass Tracer lets you simulate the execution of the application in the monitored timeframe, starting from the recorded metrics file.

 

The player is part of the agent jar and can be started with the following command:

 

java -cp SpyGlassAgent.jar by.concept.player.Main player <ip-address> <port> <archive-name>

 

When the player is started, you can connect the SpyGlass Tracer plugin to the specified IP address/port to visualize all the information as if it were a real time monitoring session of the monitored application.
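
For example, to replay a previously recorded archive locally, reusing the same illustrative port and an invented archive name:

java -cp SpyGlassAgent.jar by.concept.player.Main player 127.0.0.1 46000 recording.zip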

 

 

 

 

How to request support

 

If needed, you can request support for any issue related to the installation of the tool.

If you have a commercial version of SpyGlass Tracer, you should have bought a support ticket to engage the SpyGlass Tools team in solving issues on any aspect of product usage:

 

  • How the product performs the different activities
  • How to install
  • How to manage a specific framework
  • How to manage specific components

 

If you have no support ticket, or a free version, we will do our best to support you.

To get support, you also need to submit the following information:

 

  • Description of malfunction and expected behaviour
  • The JVM startup script
  • Log file of the agent
  • Log file of the VisualVM plugin
  • Any exceptions related to the agent available in the application log

 

Pack everything in a zip archive and send it to support@spyglasstools.com.

Where the log files can be found

 

If you need to submit information to the support team, you need to send the logs of the tool.

In the agent installation directory there is a “log” sub-directory where all agent operations are logged.

 

This can be useful for a first analysis.

Our team may ask you to increase the log level to SUPPORT.

To do so, you have to edit the configuration file as follows:

 

<log destinations="file" level="SUPPORT" development="0" writeClass="0"/>

 

When done, restart the server and check again the conditions under which the malfunction should occur.

 

The SUPPORT option saves all available information and is intended only for troubleshooting, when an issue cannot be fixed otherwise. However, this configuration consumes a large amount of resources, so it is suggested only for analysis in case of an error of the tool.

 

If you need to retrieve the plugin logs, you should look in the following location:

 

<user>\<application data>\.visualvm\7\var\log\messages.log

 

although it can vary between releases and installation configurations.

 

 

 

Contacts and Training

 

See the SpyGlass Tools website for any updates (www.spyglasstools.com) or contact us at info@spyglasstools.com.

 

[1] Insert a Java regular expression

[2] Insert a Java regular expression

[3] IBM JVM will be supported soon

[4] For previous Sun/Oracle JVM versions, or versions from other Java vendors, you have to install VisualVM 1.3.6.

[5] To download these and other plug-ins, go to the SpyGlass Tools website (http://spyglasstools.com/products/).

[6] The consequence is that monitoring business transactions may require a complex data aggregation.

[7] This information is available only for SUN/Oracle JVM 1.6.0_24 or later.

[8] If you want to monitor some particular exceptions, you should include them in the list of the instrumented classes in the configuration file.

[9] Keep in mind that the tool can be configured to exclude too fast transactions, so you can reduce information (and noise) to be shown.

[10] This number can be easily configured but keep in mind you should increase memory used by the plugin to efficiently manage them.

[11] Available only for SUN/Oracle JVM, release 1.6.0_24 or later.

[12] Currently only the total execution time can be defined as a threshold.

[13] The standard groups don't need to be defined in the configuration file.

