Working with CA Support To Troubleshoot a "Crashing" or "Hanging" CA Service Desk Manager Process

Document ID : KB000020303
Last Modified Date : 14/02/2018

Description:

This document provides the appropriate steps to follow when working with CA Support on a "Crashing" or "Hanging" process issue.

Solution:

Working with CA Support & Troubleshooting a "Crashing" or "Hanging" Process

First, Determine if a Process is "Crashing" or "Hanging"

Many times when a Service Desk process seems to be failing, you may be asked by CA Support if it is "crashing" or "hanging", and it is sometimes difficult to tell the difference. This document clarifies the difference and provides the knowledge needed to determine whether, in your case, a process has "crashed" or is "hanging." This document is specific to environments running a Microsoft Windows based operating system.

A "Crashing" Process Defined
A Crashing or Crashed process is one that fails in such a way that it either stops running completely, or recycles itself.

A "Hanging" or "Hung" Process Defined
A Hanging or Hung process is one that appears not to be responding, but at the same time, still appears to be in a running state.

To determine if the process has crashed, confirm or answer the following:

  1. After the "crash", does the process still show as running when you run pdm_status?

  2. After the "crash", does the process still show in the Task Manager process list?

  3. In the Service Desk stdlogs, at the time of the "crash" (which could be before, during, or after the occurrence is reported to you), search for the word "died" and look for any messages similar to "xxxxx process died: restarting" (where xxxxx is a process name such as domsrvr.exe or webengine.exe).

  4. In the Service Desk stdlogs, at the time of the "crash" (which could be before, during, or after the occurrence is reported to you), search for the word "FATAL" and look for any FATAL-type message, including "EXIT", "SIGSEGV", or "CANNOT ALLOCATE xxxxx BYTES".

If you can answer "No" to #1 and #2, and confirm at least one of the messages in the logs on #3 or #4, then most likely you are experiencing a "crashing" process.

If you answer "Yes" to #1 and #2, and are not able to confirm any of the messages in the logs on #3 or #4, then you are most likely experiencing a "hanging" process.
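As a quick way to perform the log checks in items 3 and 4, the stdlog files can be scanned from a Windows command prompt. The following is a minimal sketch, assuming the NX_ROOT environment variable points to your Service Desk installation; adjust the path if your logs are kept elsewhere:

  rem Confirm whether the process still shows as running
  pdm_status

  rem Scan all stdlog files for "died" and "FATAL" messages around the time of the failure
  cd /d "%NX_ROOT%\log"
  findstr /i /c:"died" /c:"FATAL" stdlog.*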

If a process appears to be in a "hung" state and does not appear to be responding, please confirm this by performing the following steps:

First, run the following command to see if the process responds to a request via the command line: "pdm_diag -a {slump name of process}"

**To get the slump name of the process, you can run the slstat command and pipe its output to a file by running the following command: "slstat > slstat.txt"
Example: If a webengine is hanging, and the slstat output shows that the slump name of the failing webengine is "web:local", you would run the command as follows to see if that webengine process is responding: "pdm_diag -a web:local"

If you receive information back from the process, then the process IS actually responding. If you do not receive information back from the process, and it appears the command is hanging, then the process is most likely in a "hung state" and will not respond with any information.
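For example, the two checks above could be run back to back from a command prompt (a sketch; "web:local" is the slump name from the example above and will differ in your environment):

  rem List all processes registered with the slump and save the output to a file
  slstat > slstat.txt

  rem Look up the slump name of the suspect process in slstat.txt, then ask it to respond
  rem If this command itself hangs and returns no information, the process is likely hung
  pdm_diag -a web:local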

Then run the following two commands to turn on advanced tracing and logging for the hung process and let it run for about 30 seconds:

"pdm_logstat -n {slump name of process} TRACE"
"bop_logging {slump name of process} -f $NX_ROOT\log\{processname}.out -n 10 -m 20000000 ON"

NOTE: In most cases - it is a good practice to turn bop logging on for all domsrvrs, webengines, and spelsrvrs, even the ones that are not hanging or crashing - this will allow CA Support and Sustaining Engineering to see how other processes are being affected by the hanging or crashing process.

Then turn the logging off by running the following commands:

"pdm_logstat -n {slump name of process}"
"bop_logging {slump name of process} OFF"

Example:
Using the same example above for a hanging webengine process, the syntax would be as follows:
"pdm_logstat -n web:local TRACE"
"bop_logging web:local -f $NX_ROOT\log\weblocal.out -n 10 -m 20000000 ON"

**The output files for this logging are written to the Service Desk log directory, so they will be uploaded along with the log directory to the support issue once all required files, output, and information have been gathered.
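Putting the tracing steps together, a complete capture for the hanging webengine in the example above might look like the following (a sketch; the slump name, output file, and 30-second wait are illustrative and should be adjusted to your situation):

  rem Turn on TRACE logging and bop logging for the hung process
  pdm_logstat -n web:local TRACE
  bop_logging web:local -f $NX_ROOT\log\weblocal.out -n 10 -m 20000000 ON

  rem Let the tracing run for roughly 30 seconds while the hang is occurring
  timeout /t 30

  rem Turn the tracing and logging back off
  pdm_logstat -n web:local
  bop_logging web:local OFF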

Steps to take once you have confirmed that you have a "crashing" or "hanging" process:

It is always best to have a crash dump file generated for a "crashing" or "hanging" process. Once a crash dump file is generated, your CA Support Engineer will work with the Sustaining Engineering Team to try and pinpoint the probable cause of the crash or hang.

Crash dump files can be generated in multiple ways - depending on your environment, and whether the process had been determined to be "crashing" or "hanging."

Use the chart below to help you decide which option is most applicable for you:

Figure 1

What to do after the dump file has been generated:
Once a dump file has been generated for a crashing or hanging process, please fill out a "Crash Dump Template" as supplied to you by CA Support, shown below. This will serve as a checklist for you to gather all the required files, information, and data needed by CA Support to analyze the dump file(s) and help pinpoint the source of the crash or hang. The following is a copy of the Windows Crash Dump Template document - which should be supplied to you by CA Support (separately from this document):

Windows Crash Dump Template

  • Please fill this out as best you can after you capture a dump file for a dying, crashing or hanging process.
  • Simply insert your answers/information to these items in-line below each item.
  • You may cut and paste this template into the issue via support.ca.com, or you may save it and upload it to the issue as an attachment.
  • If you are unsure about a specific item - please ask your CA Support Engineer for clarification.
  1. Please review the stdlog file that captures the timeframe of when the dump occurred and supply us with the following information:
    • Was the process ended by a SIGSEGV message, a SIGBUS message, or any other "FATAL"-type message?
    • What is seen in the stdlog file right before, during, and after the time the process crashed?
    • What errors, if any, were reported in the logs right before, during, and after the time the process crashed?
    • If the dump was generated using DebugDiag, ADPlus, ProcDump, or the Microsoft Process Dumper Utility, then simply upload the .DMP file that was generated by the utility used to generate the dump, and specify the filename of the dump file (or zip file that contains the dump file) here.

      Note: If the dump file was generated by Dr. Watson - please attach the 'User.dmp' and 'drwatson.log' log files to the issue.
  2. In what location was the 'User.dmp' file first found?

  3. Please specify the date/time the dump file was generated.

  4. How many times has the failing process crashed since first reported?

  5. Are there any reproducible steps noted prior to when this crash/hang occurs?

  6. Supply a "Directory Listing" output of the Service Desk root directory (NX_ROOT) by opening a command line window, navigating to the directory where Service Desk is installed, and running the command "dir /s > dir_listing.out" - this will generate a file called dir_listing.out (see the example command sequence after the end of this template). Please upload this file and specify the name of the file (or zip file that contains the dir_listing.out file) here.

  7. Please run the command "winmsd" - this should pop up a window with system information. Click on the file menu and select save to save the output to a file. Please upload that file and specify the name of the file (or zip file that contains the output file) here. NOTE - on some environments, for security reasons, winmsd may not run. In this case, please specify the specs of the hardware, and whether or not it is VMware based, for the system where the dump file was generated, here.

  8. Navigate to the Service Desk\bin directory and run "pdm_ident {process name} > pdm_ident.out", where {process name} is the name of the Service Desk process for which the dump file was generated (see the example command sequence after the end of this template). If the failing process is javaw.exe, you will need to run pdm_ident on the sda65.dll file, as the javaw process does not contain pdm_ident information. Please upload the pdm_ident.out file, and specify the name of the file (or zip file that contains the output file) here.

  9. Please attach your patch history file ($NX_ROOT/<machine name>.his) to the issue and specify the name of the file (or zip file that contains the history file) here.

  10. Please zip up the entire Service Desk\log directory and attach it to the issue, and specify the name of the zip file here.

  11. Please zip up the Service Desk\site\mods directory and attach it to the issue, and specify the name of the zip file here.

  12. Please upload the Windows event log files, and specify the name of the files (or zip file containing the event logs) here.

***end of crash dump template***
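For reference, the commands mentioned in items 6 and 8 of the template might be run as follows (a sketch; it assumes the NX_ROOT environment variable points to the Service Desk installation directory, and uses webengine.exe as an example of the failing process):

  rem Item 6: directory listing of the Service Desk root directory (NX_ROOT)
  cd /d "%NX_ROOT%"
  dir /s > dir_listing.out

  rem Item 8: pdm_ident output for the process the dump file was generated for
  cd bin
  pdm_ident webengine.exe > pdm_ident.out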

After Filling Out the Crash Dump Template
Once you have generated the crash dump file and have gathered all required information, files, and data as per the Crash Dump Template document, please upload everything to your CA Support issue. Please be sure to label the filenames of all uploaded files appropriately so that it is easy for CA Support to see which file is which. We have found that the best way to do this is to gather all the files and output first, set appropriate file names, and then, under each respective item on the Crash Dump Template document, simply write the name of the file that corresponds with that item, if applicable.

Once all the required files and information have been uploaded to the support issue, your CA Support Engineer will review the information supplied and will then engage the Sustaining Engineering Team to assist in analysis of the dump files.

What should I do if additional dump files are produced for additional occurrences of the same exact problem on the same server?
Sometimes multiple occurrences will produce multiple dump files if the dump files are being automatically generated by DebugDiag, ADPlus or another tool.

To avoid any confusion and "clouding" of your open support issue with CA Support, do NOT upload the additional dump files and logs without talking to your CA Support Engineer first. There is no need to upload multiple dump files for the same problem unless specifically requested by your CA Support Engineer. The CA Support Team may already have found the problem and may be working on possible resolutions or code changes to fix it, and adding these additional files, logs, and updates may only cloud the issue and make it more difficult for others to review.

What should I do if a similar, but not exactly the same problem occurs on the same server?
If you experience a problem that is similar to the previous occurrence but not exactly the same (for example, the original problem was a hanging webengine process and now you are experiencing a hanging spelsrvr process), it should be treated as a different problem, and a separate new issue should be opened. Follow the same steps that were followed for the original problem for this new, slightly different occurrence, including filling out the Crash Dump Template document and uploading the files and information specific to the new problem in the new issue.

What should I do if the same problem (as the original issue) occurs on a different server?
If you experience a problem where the same process crashes or hangs, but on a different server, you should follow all the same steps you did to generate the original crash dump, and fill out the Crash Dump Template document for the different server where the new crash or hang has occurred. You may upload the new crash dump, along with the completed Crash Dump Template document and all required information and files, to the original issue - however, you MUST make sure that ALL files are appropriately labeled so it is easily visible that they are from a different server than the one where the original issue occurred. The best way to do this is to zip up ALL of the files for this new occurrence into one zip file specifically labeled with the second server's name and the date of the occurrence.