Introscope .NET Agent Troubleshooting and Best Practices

Document ID : KB000111638
Last Modified Date : 04/01/2019
The following is a high-level list of techniques and suggestions to employ when troubleshooting these common Introscope .NET Agent performance and configuration issues:

- .NET Agent installation problems
- Instrumentation not working
- .NET application crashing, breaking, or not responding
- Agent overhead: high CPU and memory consumption
- Slow application response time
APM 10.x

1) Make sure you are using the correct .NET agent installer package:

Use the 64-bit agent installer if you plan to monitor an x64 machine; it lets you monitor both 32-bit and 64-bit applications.
Important: After installation, make sure to restart the .NET application. For IIS, run: “iisreset”

2) Review the .NET install log:
-If you used the .exe installer, the log (IntroscopeDotNetAgentInstall64.log) is created in the folder from which you launched the installer.
-If you used the .msi installer, the log is located in the %temp% folder.
For example:
MSI (c) (90:5C) [04:49:31:746]: Product: CA APM .NET Agent (64 bit) -- Installation operation completed successfully.
MSI (c) (90:5C) [04:49:31:746]: Windows Installer installed the product. Product Name: CA APM .NET Agent (64 bit). Product Version: Product Language: 1033. Manufacturer: CA Technologies. Installation success or error status: 0.

3) Verify that the correct version of wily.Agent.dll has been registered in the GAC (C:\Windows\assembly).
For example, when using 10.5.2:
If it is not listed, register it manually: drag and drop AGENT_HOME\wily\bin\wily.Agent.dll into C:\Windows\assembly.
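If drag and drop is not available in your environment, a hedged alternative is the gacutil tool from the Windows SDK — note that gacutil is not installed on every server, and the install path below is an example, not a confirmed default:

```
gacutil /i "C:\Program Files\CA APM\wily\bin\wily.Agent.dll"
```

Run it from an elevated Developer Command Prompt, then verify the assembly appears in C:\Windows\assembly as described above.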

4) Verify that the below environment variables exist:

Open Command Prompt as Administrator, execute the command: set
The output should be similar to the below:
NOTE: To disable the .NET agent you can:
a) uninstall the agent from “Programs and Features”
b) set the environment variable Cor_Enable_Profiling=0x0; this ensures that the APM agent code is not invoked when the .NET CLR is launched
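The variable listing itself is not reproduced in this article. As a sketch, the CLR profiler variables set up for the agent typically look like the following — the GUID shown is the APM .NET Agent GUID quoted later in this article, and any additional variables in your output (such as the path to the agent profile) vary by version:

```
Cor_Enable_Profiling=0x1
COR_PROFILER={5F048FC6-251C-4684-8CCA-76047B02AC98}
```

If Cor_Enable_Profiling is missing or set to 0x0, the CLR will not load the agent at all.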

5) Make sure permissions to the AGENT_HOME have been set accordingly

If you are trying to instrument a Windows service or standalone app, make sure to run:
<AGENT_HOME>\wily\wilypermission.exe <AGENT_HOME>\wily <your application>
For example:
<AGENT_HOME>\wily\wilypermission.exe <AGENT_HOME>\wily mytestapp.exe

6) Verify that the .NET agent is attached to the .NET process using : tasklist /m wily*

For example:
Image Name                     PID Modules
========================= ======== ============================================
PerfMonCollectorAgent.exe     1300 wily.NativeProfiler.dll, wily.Agent.dll
w3wp.exe                      4000 wily.NativeProfiler.dll, wily.Agent.dll
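When many processes are involved, the tasklist output can also be checked programmatically. The following is a minimal Python sketch (the embedded sample mirrors the example output above; in practice you would feed it the real output of tasklist /m wily*):

```python
# Minimal sketch: extract the names of processes that have the
# wily agent DLLs loaded, from `tasklist /m wily*`-style output.
SAMPLE = """Image Name                     PID Modules
========================= ======== ============================================
PerfMonCollectorAgent.exe     1300 wily.NativeProfiler.dll, wily.Agent.dll
w3wp.exe                      4000 wily.NativeProfiler.dll, wily.Agent.dll"""

def attached_processes(tasklist_output):
    """Return image names whose module list includes wily.Agent.dll."""
    names = []
    for line in tasklist_output.splitlines()[2:]:  # skip the two header lines
        parts = line.split(None, 2)                # image name, PID, module list
        if len(parts) == 3 and "wily.Agent.dll" in parts[2]:
            names.append(parts[0])
    return names

print(attached_processes(SAMPLE))
```

Any .NET process you expect to be monitored but which is absent from this list has not loaded the agent.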
7) If you are using CLR v4, set introscope.nativeprofiler.clrv4.transparency.checks.disabled=true.

The .NET 4 CLR performs additional checks on certain assemblies which may invalidate the instrumented code, throwing a VerificationException when the application is accessed. Setting this agent property to true suppresses these checks.

8) Check if the Agent logs have been created:

After the .NET agent has been installed, make sure to:
- restart the .NET application. For IIS, run: “iisreset”
- generate activity in your .NET application
By default the Agent creates the below standard logs:
NOTE: If the NativeProfiler logs are created but no AutoProbe or IntroscopeAgent logs appear, the agent has not started because it did not find the triggering method. To resolve this issue, uncomment the agent's generic trigger property and set it to true.
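The property itself is not reproduced in this article. In 10.x agents the generic trigger toggle is typically the line below — treat the exact name as an assumption and verify it against your own IntroscopeAgent.profile:

```
introscope.nativeprofiler.generic.agent.trigger.enabled=true
```

After changing it, restart IIS or the monitored .NET application so the native profiler re-reads the setting.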

9) If instrumentation still does not work, try to configure the agent to instrument all available .NET applications

Comment out the introscope.agent.dotnet.monitorApplications property; this will help you confirm whether the problem is related to the specific .NET application.

Important: Remember that for .NET standalone applications, the application name is the one shown in Task Manager.
For example, for a DummyWinApp.exe application, you would add DummyWinApp.exe to the monitorApplications property.
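As a sketch (DummyWinApp.exe is the example name from the text; keep any other entries your profile already lists), the updated property would look like:

```
introscope.agent.dotnet.monitorApplications=DummyWinApp.exe
```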

10) Check for possible errors in the Windows Event Viewer > Application log:

“Failed to CoCreate profiler” “The profiler was loaded successfully. Profiler CLSID: '{D6E0BA92-3BC3-45ff-B9CC-B4B5AB7190BC}'”
These messages indicate that another CLR profiler is preventing the .NET Agent from probing the .NET process; only one .NET profiler can run at a time.
The APM .NET Agent GUID is {5F048FC6-251C-4684-8CCA-76047B02AC98}. To resolve this issue, uninstall the other .NET profiler.

You can run the following registry query to list all the installed .NET profilers:
-Open a Command Prompt as Administrator
-Run: REG QUERY HKLM /f "COR_PROFILER" /s >> apm_netagent_regquery_corprofiler.txt
"System.InvalidProgramException: Common Language Runtime detected an invalid program" "…cannot be activated due to an exception during compilation"

If you see the above messages, turn off “WCFRuntimeTracing” in the webservices.pbd.
Then restart IIS or the monitored .NET application.
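Turning the tracer off means commenting out its TurnOn line in webservices.pbd:

```
#TurnOn: WCFRuntimeTracing
```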

11) If you are using Ninject, manually update the .NET application configuration to use UseReflectionBasedInjection NinjectSetting.

Make sure the reflection setting is added to every kernel initialization call.
For example:
public NinjectDependencyResolver()
{
    kernel = new StandardKernel(new NinjectSettings { UseReflectionBasedInjection = true });
}
For exact details on how to enable UseReflectionBasedInjection, contact Ninject support or consult the Ninject communities.

12) If the agent is reporting to the Investigator but you don’t see the expected metrics, check whether a metric clamp has been reached:
a) Check the agent clamp from the Metric Browser, expand the branch
 Custom Metric Host (virtual)
   - Custom Metric Process (virtual)
      - Custom Metric Agent (virtual)(collector_host@port)(SuperDomain)
         - Agents
            - Host
               - Process
                   - AgentName
Look at the value of the “Is Clamped” metric; it should return zero (0).
b) Check whether the problem is related to the perfmon metric clamp (introscope.agent.perfmon.metric.limit); the default value is 1000.
[VERBOSE] [IntroscopeAgent.PerfMonService] Metric limit of xx has been reached
You can verify this condition by enabling VERBOSE logging in logging.config.xml; you then need to save the IntroscopeAgent.profile for the change in the XML to be taken into account.
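If the perfmon clamp is the cause, you can raise it in the IntroscopeAgent.profile (2000 below is an arbitrary example value; size it to your environment and metric capacity):

```
introscope.agent.perfmon.metric.limit=2000
```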
13) In case of high CPU/memory usage, slow response times, or the application no longer working after the agent has been enabled:

TEST#1: disable the entire instrumentation

-Open the IntroscopeAgent.profile and set introscope.autoprobe.enable=false
-Stop the Windows perfmon collector service.
-Restart IIS or the instrumented .NET application.
If the problem persists, contact CA Support.
If the problem does not persist, proceed with TEST#2.
TEST#2: reduce the agent instrumentation:
a) re-enable instrumentation:
set introscope.autoprobe.enable=true
b) Turn off socket instrumentation in the toggles-typical.pbd or toggles-full.pbd as below:
#TurnOn: SocketTracing
c) Disable WCF/SOAP header insertion in the webservices.pbd as below
#TurnOn: WCFRuntimeTracing
#TurnOn: WebServicesCorrelationTracing
#TurnOn: WCFServerFaultTracing
#TurnOn: WCFClientFaultTracing
#TurnOn: WCFServerTracing
#TurnOn: WCFClientTracing
By disabling WCFRuntimeTracing, the .NET Agent will not insert the correlation ID into WCF/SOAP message headers. If you identify this as the root cause, you can try switching to HTTP-header correlation ID insertion instead by enabling the HTTP correlation tracers in webservices.pbd: uncomment the TurnOn: ClientCorrelationTracing line.
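Uncommented, the HTTP correlation toggle in webservices.pbd reads:

```
TurnOn: ClientCorrelationTracing
```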

d) Disable the SQL instrumentation:
You can try to:
- reduce the length of captured SQL statements. The default maximum length captured by the agent is 999 characters. You can modify this by adding the following line to the IntroscopeAgent.profile file: introscope.agent.sqlagent.sql.maxlength=
- disable some SQL metrics reporting
- If the problem persists, try turning off SQL tracing in the toggles-typical.pbd or toggles-full.pbd as below:
#TurnOn: SQLAgentCommands
#TurnOn: SQLAgentDataReaders
#TurnOn: SQLAgentTransactions
#TurnOn: SQLAgentConnections
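For instance, a reduced SQL capture length would be configured as follows (500 is an arbitrary example value, well under the default of 999):

```
introscope.agent.sqlagent.sql.maxlength=500
```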
e) Temporarily disable the traces feature by setting
This is a hot property; there is no need to restart the application server.

f) If you have updated introscope.agent.dotnet.monitorApplications, reset the property to its default value; otherwise, try monitoring only a subset of applications

TEST#3: reduce perfmon collection to prevent possible CPU overhead/spikes
Verify whether the CPU overhead corresponds to perfmon reporting/browsing, or is simply an issue of perfmon counter metric explosion, by tuning the perfmon agent configuration in the IntroscopeAgent.profile.
a) Set introscope.agent.perfmon.category.browseEnabled=false in the IntroscopeAgent.profile
This will help you confirm whether disabling perfmon category browsing significantly reduces the CPU overhead
b) Set introscope.agent.perfmon.metric.pollIntervalInSeconds=150 to see whether the CPU spikes also occur around every 150 seconds
c) By default the agent gathers the perfmon counters below; depending on the number and design of your applications, this setting could instruct the agent to gather a huge number of metrics, causing overhead:
introscope.agent.perfmon.metric.filterPattern=|Processor|*|*,|.NET Data Provider*|*|*,|.NET CLR*|{osprocessname}|*,|.NET CLR Data|*|*,|Process|{osprocessname}|*,|ASP.NET*|*
For testing purposes, try:
1. Stop the PerfMonCollectorAgent service
2. Open the IntroscopeAgent.profile and set:
introscope.agent.perfmon.category.browseEnabled=false
introscope.agent.perfmon.metric.pollIntervalInSeconds=150

3.Start the PerfMonCollectorAgent service
What to collect if the problem persists:
Enable DEBUG logging in logging.config.xml and save the IntroscopeAgent.profile so the change in the XML is taken into account.
If the application crashed, enable bytecode logging: set introscope.nativeprofiler.logBytecode=true and introscope.nativeprofiler.logAllMethodsNoticed=true

Try to reproduce the issue and collect the below information

  1. Install logs and Agent logs (if any, AGENT_HOME/wily/logs)

  2. AGENT_HOME/wily/IntroscopeAgent.profile

  3. The result of "systeminfo" command.

  4. The result of "set" command.

  5. Exercise the application, then run "tasklist /m", send the output.

  6. Screenshot of the C:\Windows\assembly folder, listing the wily.*.dll files

  7. Screenshot of application events from Windows Event viewer

  8. If the issue is related to high CPU/memory, a crash, or a hang, use the Debug Diagnostics Tool from Microsoft to capture a user dump, which contains both heap and thread snapshots. The following KB has both a download link and usage instructions:
    Follow the steps described in the link below to capture the performance dumps:
    There are multiple ways to capture dumps of a .NET process. One simple way is to bring up Task Manager, find the .NET process with the memory issue, then right-click the process and select the "Create Dump File" option in its context menu.