websphere-infra: WebSphere MQ All

MQSC: indicates a runmqsc command, which can be executed interactively inside runmqsc QmgrName, or as a one-line command by piping it to runmqsc:
echo command | runmqsc QmgrName
on UNIX platforms, add double quotes around the command:
echo "command" | runmqsc QmgrName
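For example, to check that a queue manager responds (QM1 is a placeholder queue manager name; DISPLAY QMGR is a standard MQSC command):

```
# Interactive session: type MQSC commands at the prompt, then 'end' to quit
runmqsc QM1

# One-line equivalent on UNIX, piping the command in
echo "DISPLAY QMGR" | runmqsc QM1
```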
MQ Display commands

MQ start-Stop commands

MQ status verification commands


Backup queue manager in WebSphere MQ


Creating a backup queue manager
1. Create a backup queue manager for the existing queue manager using the control command crtmqm. The backup queue manager requires the following:
* To have the same attributes as the existing queue manager, for example the queue manager name, the logging type, and the log file size.
* To be on the same platform as the existing queue manager.
* To be at an equal, or higher, code level than the existing queue manager.
2. Take copies of all the existing queue manager’s data and log file directories, including all subdirectories, as described in Backing up queue manager data.
3. Overwrite the backup queue manager’s data and log file directories, including all subdirectories, with the copies taken from the existing queue manager.
4. Execute the following control command on the backup queue manager:
strmqm -r BackupQMName
This flags the queue manager as a backup queue manager within WebSphere® MQ, and replays all the copied log extents to bring the backup queue manager in step with the existing queue manager.

Starting a backup queue manager
1. Execute the following control command to activate the backup queue manager:
strmqm -a BackupQMName
The backup queue manager is activated. Now active, the backup queue manager can no longer be updated.

2. Execute the following control command to start the backup queue manager:
strmqm BackupQMName

3. Restart all channels

Note: stop the existing queue manager using endmqm -w before copying its data to the backup queue manager
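The whole cycle can be sketched end-to-end as follows (QM1 is a placeholder queue manager name; the copy destination is invented, and paths assume UNIX defaults):

```
# On the existing queue manager: quiesce before copying
endmqm -w QM1

# Copy data and log directories, including all subdirectories
cp -R /var/mqm/qmgrs/QM1  /backup/qmgrs/QM1
cp -R /var/mqm/log/QM1    /backup/log/QM1

# On the backup queue manager: replay the copied log extents
strmqm -r QM1

# Later, to take over: activate, then start
strmqm -a QM1
strmqm QM1
```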



Logging in MQ

=> MQ logs are also known as transaction logs.

=> An MQ log consists of two parts
– Data log files (named S0000000.LOG – S9999999.LOG)
– A control log file (named amqhlctl.lfh)

=> Transaction logs are created when you create the queue manager. With the default locations, the logs go to
– /var/mqm/log/Queue_Manager on UNIX
– C:\Program Files\IBM\WebSphere MQ\log\Queue_Manager on Windows
=> These MQ logs/transaction logs hold the following information
– Transaction activity, known as Units Of Work
– Persistent messages
– Internal data about queue manager objects
– Persistent channel status

=> Types of logging
MQ provides two logging options: 1. Circular (default) 2. Linear

Circular logging
– Good performance
– Easy administration
Linear logging
– Media recovery
– Ability to archive/backup

=> Configuring Logging
Logging configuration has a direct effect on the performance of MQ. Some logging parameters can be changed after creating the queue manager, but others cannot.

Primary Logs are the initial and minimum logs.
Secondary logs can be created when the primary logs become full.

          Default  Minimum  Max (UNIX)  Max (Win)
Primary      3        2        510        254
Secondary    2        1        509        253

=> The combined total of primary and secondary logs is constrained to 511 on UNIX and 255 on Windows platforms. These are the active logs

=> Log file size is a multiple of the 4KB log file page size. This cannot be changed after creating the Queue Manager.

=> Log file size details table
               pages   file size   max total log space
Default Win      256       1MB       256MB
Default UNIX    1024       4MB       2GB
Minimum Win       32     128KB       32MB
Minimum UNIX      64     256KB       128MB
Maximum        65535     256MB      64GB (Win) / 128GB (UNIX)
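The arithmetic behind the table is simple: one log extent is LogFilePages × 4 KB, and the active-log ceiling is that size times the maximum number of extents. A quick sketch (the function names are mine, for illustration):

```python
PAGE_SIZE = 4 * 1024  # MQ log page size: 4 KB


def log_extent_bytes(log_file_pages):
    """Size of one log file (extent) in bytes."""
    return log_file_pages * PAGE_SIZE


def max_active_log_bytes(log_file_pages, max_extents):
    """Upper bound on total active log space."""
    return log_extent_bytes(log_file_pages) * max_extents


# UNIX defaults: 1024 pages -> 4 MB extents; 511 extents -> ~2 GB total
print(log_extent_bytes(1024) // 2**20)           # 4
print(max_active_log_bytes(1024, 511) // 2**20)  # 2044
```

This reproduces the table: for example the Windows default of 256 pages gives 1 MB extents, and 255 extents gives roughly 256 MB of active log.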

=> Log buffer size specifies the number of 4KB pages MQ uses to buffer log file writes.
– Default is 128, which is specified by 0 in the MQ configuration file
– Minimum is 18 and maximum is 4096

=> Log write integrity is the algorithm used to ensure the integrity of the logs.
– Default algorithm is triple write.
– This can be changed from QM configuration file

=> Logging configuration files
– mqs.ini is the config file for MQ-level settings
– qm.ini is the config file for queue manager-level settings (which take effect over MQ-level settings)


Log Defaults:
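As an illustration, the log defaults live in the Log stanza of qm.ini; a typical stanza looks like this (values shown are the UNIX defaults, not recommendations, and QMNAME is a placeholder):

```
Log:
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogFilePages=1024
   LogType=CIRCULAR
   LogBufferPages=0
   LogWriteIntegrity=TripleWrite
   LogPath=/var/mqm/log/QMNAME/
```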


=> Log Management
– Circular logging does not need manual log management
– Linear logging needs manual backup/cleanup

=> Some logging related MQ codes
– AMQ7467 Old log file required for queue manager start
– AMQ7468 old log file required for Queue Manager media recovery

– AMQ5037 Queue manager task ‘LOG-FORMAT’ started
– AMQ5037 Queue Manager task ‘LOGGEREU’ started
– AMQ5037 Queue Manager task ‘LOGGER-IO’ started

=> Media Recording and Recovery
– Recording media images
The rcdmqimg command records an image of the specified queue manager objects.
– Recovery
is required when a power loss/reboot/QM failure occurs.
When recovery is performed:
– Queues are restored to their committed state at the time of failure
– Persistent data is not lost
– Non-persistent messages are discarded

=> Log recovery scenarios
– Disk failures
In case of circular logging, restore the QM and logs from backup,
or rebuild the QM using SupportPac MS03
In case of linear logging, restore damaged objects using rcrmqobj

=> Log recovery summary
– In case of circular logging, no media recovery is available
– In case of linear logging, use rcrmqobj to recover/recreate the objects from the media image
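With linear logging, the record/recover pair looks roughly like this (QM1 and the queue name are placeholders):

```
# Record media images of all recoverable objects (linear logging only)
rcdmqimg -m QM1 -t all "*"

# Later, recreate a damaged local queue from its media image
rcrmqobj -m QM1 -t queue DAMAGED.QUEUE
```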

MQ Queue Manager Clusters

Clustering is a way to logically group WebSphere MQ queue managers, giving:
– reduced system administration, due to fewer channel, remote queue, and transmission queue definitions
– increased availability and workload balancing

=> Basic Cluster setup
> Step-1
Determine which queue manager should hold full repositories
A full repository contains a complete set of information about every queue manager and object in the cluster
You will need at least one, preferably two
> Step-2
Alter the queue manager definitions to add repository definitions
ALTER QMGR REPOS(cluster_name)
> Step-3
Define the CLUSRCVR channels
Every queue manager in the cluster needs a CLUSRCVR with a conname pointing to itself.
DEFINE CHANNEL(channel_name) CHLTYPE(CLUSRCVR) TRPTYP(TCP) CONNAME('my_ip_name_or_address(port)') CLUSTER(cluster_name)
> Step-4
Define the CLUSSDR channels
Define one CLUSSDR to a full repository queue manager. The channel name must match that of the CLUSRCVR on the full repository
DO NOT define a CLUSSDR to point to a partial repository.
DEFINE CHANNEL(channel_name) CHLTYPE(CLUSSDR) TRPTYP(TCP) CONNAME('remote_ip_name_or_address(port)') CLUSTER(cluster_name)
> Step-5
Define a cluster queue
DEFINE QLOCAL(qname) CLUSTER(cluster_name)
Other queue managers in the cluster can send messages to it without making remote-queue definitions for it.
Only the local queue manager can read messages from an instance of the cluster queue
You can use a sample program to test putting messages to clustered queues
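Putting steps 2–5 together for a full repository queue manager, the MQSC might look like this (all names, hosts, and ports are placeholders; the CLUSSDR points at the other full repository):

```
ALTER QMGR REPOS(DEMOCLUS)
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYP(TCP) +
       CONNAME('qm1.example.com(1414)') CLUSTER(DEMOCLUS)
DEFINE CHANNEL(TO.FR2) CHLTYPE(CLUSSDR) TRPTYP(TCP) +
       CONNAME('fr2.example.com(1414)') CLUSTER(DEMOCLUS)
DEFINE QLOCAL(DEMO.QUEUE) CLUSTER(DEMOCLUS)
```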

=> Cluster Commands
AMQ8408: Display Queue Manager details.
QMNAME(QM1) QMID(QM1_2005-07-12_17.14.38) REPOS(QMCLUS)REPOSNL( )
QMID is an internally generated unique name that consists of the queue manager name plus the time the queue manager was created

dis clusqmgr(*)
Display cluster queue manager details

dis chstatus(*) all
Display channel status details

dis qcluster(*)
It displays information about clustered queues only.
A cluster queue will not be displayed on a partial repository until an application has opened it.
This command displays information about queues with TYPE(QCLUSTER)

=> Work load balancing
When a cluster contains more than one instance of the same queue, workload balancing determines the best queue manager to route a message to
– At its simplest, workload management results in a round-robin effect
MQ V6 has additional parameters that can be used to influence the results of the algorithm.
– Queue Managers: CLWLUSEQ, CLWLMRUC
For workload balancing to occur:
– open the queue with the MQOO_BIND_NOT_FIXED open option
– open with the default MQOO_BIND_AS_Q_DEF and with DEFBIND(NOTFIXED) set in the queue definition
– Leave MQMD.ObjectQMgrName blank to allow the queue manager to choose the queue instance
– To force the message to a specific instance of the clustered queue, specify that queue manager’s name in ObjectQmgrName

=> Namelists in clusters
– A queue manager may be a member of more than one cluster. List those clusters in a NAMELIST.
– You can alter a full repository QMGR to use REPOSNL(namelist) rather than REPOS.
– For channels and queues, you can specify CLUSNL(namelist) rather than specifying the CLUSTER parameter.

– REFRESH CLUSTER removes and rebuilds locally held information about a cluster.
– REFRESH CLUSTER(clustername) REPOS(YES) also refreshes information about full repository queue managers, and cannot be issued from a full repository.

RESET CLUSTER is issued from a full repository queue manager. It forcibly removes a queue manager or specific QMID from a cluster.

=> Troubleshooting
– Is the repository manager still running?
Check the AMQERRxx.log or CHIN joblog.
– Are channels able to run in both directions?
Display CLUSQMGR and CHSTATUS information.
– Are the SYSTEM.CLUSTER.* queues enabled?
– Are there messages building up on the SYSTEM.CLUSTER.COMMAND.QUEUE or SYSTEM.CLUSTER.TRANSMIT.QUEUE?
– Are there duplicate QMIDs for a given QMGR?
– DISPLAY CLUSQMGR may show CLUSQMGR names starting with SYSTEM.TEMP.
The queue manager has not received all necessary information from the full repository.

WebSphere MQ Triggering

=> What is Triggering?
WebSphere MQ provides a feature that enables an application or channel to be started automatically when there are messages available to retrieve from a queue.

=> How it works?
– A message is put to a queue defined as triggered.
– If a series of conditions are met, the queue manager sends a trigger message to an initiation queue. This is called a trigger event.
– A trigger monitor reads the trigger message and takes the appropriate action based on the contents of the message, which is typically to start a program to process the triggered queue.
– Trigger monitors may be built-in, supplied by a SupportPac, or user written.

=> Types of Trigger (TRIGTYPE)
– FIRST: A trigger event occurs when the current depth of the triggered queue changes from 0 to 1.
Use this type of trigger when the serving program will process all the messages on the queue (i.e. until MQRC_NO_MSG_AVAILABLE).
– EVERY: A trigger event occurs every time a message arrives on the triggered queue.
Use this type of trigger when the serving program will only process one message at a time.
– DEPTH: A trigger event occurs when the number of messages on the triggered queue reaches the value of the TRIGDPTH attribute.
Use this type of trigger when the serving program is designed to process a fixed number of messages (i.e. all replies for a certain request).

=> Trigger interval (TRIGINT)
– TriggerInterval or TRIGINT is a time interval specified on the QMGR definition for use by queues defined as TRIGTYPE=FIRST.
– Situations may occur when messages are left on the queue. New messages will not cause another trigger message. To help with this situation, a trigger message will be created when the next message is put if TriggerInterval has elapsed since the last trigger message was created for the queue.

=> Trigger setup
– Create an initiation queue or use the default SYSTEM.CHANNEL.INITQ.
– Create a process definition (optional). TriggerData may be specified in lieu of a process definition.
– Create or alter a transmission queue.
– Associate the initiation queue and the process definition (if applicable) with the transmission queue, and specify the trigger attributes.
=> Triggering Example1
– QM2 is the name of the XMITQ
– QM1.TO.QM2 is the name of the channel to be started when a message hits the XMITQ
– SYSTEM.CHANNEL.INITQ is the initq monitored by the channel initiator
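In MQSC, Example 1 could be defined roughly as follows (the queue manager and channel names are the placeholders from the example):

```
DEFINE QLOCAL(QM2) USAGE(XMITQ) +
       TRIGGER TRIGTYPE(FIRST) +
       INITQ(SYSTEM.CHANNEL.INITQ) +
       TRIGDATA(QM1.TO.QM2)
```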
=> Triggering Example2
– The XMITQ definition has PROCESS instead of TRIGDATA
– The channel name is in USERDATA

=> Application Triggering
– Create an initiation queue or use the default SYSTEM.DEFAULT.INITIATION.QUEUE.
– Create a process definition.
– Create or alter a local or model queue.
– Associate the initiation queue and process definition with the local queue, and specify the trigger attributes .
APPLICID is the name of the application executable file, e.g.
APPLICID('/u/admin/test/IRMP01')
– On UNIX® systems, ENVRDATA can be set to the ampersand character to make the started application run in the background.
– The application can receive a parm list with an MQTMC2 containing USRDATA and ENVRDATA.
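A hedged MQSC sketch of the application-triggering definitions (the queue and process names are invented; the APPLICID path is the one from the example above):

```
DEFINE PROCESS(TEST.PROCESS) APPLTYPE(UNIX) +
       APPLICID('/u/admin/test/IRMP01') ENVRDATA('&')
DEFINE QLOCAL(TEST.QUEUE) +
       TRIGGER TRIGTYPE(FIRST) +
       INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) +
       PROCESS(TEST.PROCESS)
```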

=> Application Trigger Example1
– Note: If using TRIGTYPE(DEPTH) then TRIGDPTH must also be specified.
=> Application Trigger Example2
– CRTMQMQ QNAME(initq_name)
– CRTMQMPRC PRCNAME(proc_name) APPID(lib/pgm) ENVDATA (‘JOBNAME(trigapl) JOBD(lib/jobd)’)
– Note: If using TRGTYPE(*DEPTH) then TRGDEPTH must also be specified.

=> Trigger Conditions
– A trigger message is sent to the initiation queue when all of the following conditions are satisfied:
1. A message is put on a transmission or local queue.
2. The message’s priority is greater than or equal to the TriggerMsgPriority of the queue.
3. The number of messages on the queue was previously
– Zero for trigger type FIRST
– Any number for trigger type EVERY or *ALL
– TriggerDepth minus 1 for trigger type DEPTH
4. For trigger type FIRST or DEPTH, no program has the trigger queue open for GETs (Open input count=0).
5. The Get enabled attribute is set to YES on the triggered queue.
6. A Process name is specified and exists, or for transmission queues, TriggerData contains the name of a channel.
7. An Initiation queue is specified and exists and GETs and PUTs are enabled for the initiation queue.
8. The trigger monitor has been started and has the initiation queue open for GETs
9. The TriggerControl attribute is set to YES on the triggered queue.
10. The TriggerType attribute is not set to NONE.
11. For Trigger Type FIRST, the queue was not previously empty, but the TriggerInterval set for the QMGR has elapsed.
12. The only application serving the queue issues MQCLOSE and there are still messages on the queue that satisfy Conditions 2 and 6-10.
13. Certain trigger attributes of the triggered queue are changed (for example, triggering is enabled), or a trigger monitor opens the Initiation queue.
14. MSGDLVSQ is set correctly relative to the priority of the messages and the TriggerMsgPriority setting.
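A few of the conditions above (3, 4, 9, 10) can be sketched as a predicate evaluated when a message is put. This is a simplification for illustration only: the class and function names are mine, and most conditions (message priority, GET-enabled, initiation queue state, trigger monitor state, ...) are omitted.

```python
from dataclasses import dataclass


@dataclass
class TriggeredQueue:
    trigger_type: str       # "FIRST", "EVERY", or "DEPTH"
    trigger_control: bool   # TRIGGER / NOTRIGGER
    trigger_depth: int      # TRIGDPTH
    depth_before_put: int   # committed + uncommitted messages before the put
    open_input_count: int   # programs with the queue open for GETs


def put_causes_trigger_event(q: TriggeredQueue) -> bool:
    if not q.trigger_control:            # condition 9: TriggerControl=YES
        return False
    if q.trigger_type == "EVERY":        # condition 3: every message
        return True
    if q.open_input_count > 0:           # condition 4: FIRST/DEPTH only
        return False
    if q.trigger_type == "FIRST":        # depth goes 0 -> 1
        return q.depth_before_put == 0
    if q.trigger_type == "DEPTH":        # depth reaches TRIGDPTH
        return q.depth_before_put == q.trigger_depth - 1
    return False                         # condition 10: TRIGTYPE(NONE)


print(put_causes_trigger_event(
    TriggeredQueue("FIRST", True, 1,
                   depth_before_put=0, open_input_count=0)))  # True
```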

=> Triggering Problem determination
– If the setup is brand new or any configuration changes have been made, verify all the definitions are complete and correct.
– If the setup is brand new or has worked before, verify all the conditions for a trigger event are satisfied.
– If channel triggering, verify the correct channel name is specified and the channel is not in a STOPPED state.
– If application triggering
– verify the application name is correct and exists
– verify the application is coded correctly
– verify the correct authorizations are in place.
– Verify a trigger message can be delivered.
– Verify a trigger monitor is active.
– Verify trigger type and application design match.
– Try manually starting the triggered application to see if it is able to run.
– Stranded messages may occur when the triggered application fails to remove one or more messages
– for TriggerType FIRST, use TriggerInterval as a “safety net”, because a trigger event only occurs when depth goes from 0 to 1.
– for TriggerType EVERY, if the triggered application only does one MQGET, manual intervention will be required to process the messages. Otherwise, the application will only read the oldest message on the next successful trigger and the queue depth will remain non-zero.
– If there is a loop or high CPU:
– Change the triggered application to specify a WaitInterval on its MQGET.
– Check the BackoutCount in the MQMD

– For trigger EVERY, if the trigger monitor ends prematurely, no matter how many messages reside on the queue only one trigger message will be created on restart.
– RUNMQTRM will not get another trigger message until the application completes. To prevent trigger messages from accumulating
– Run multiple trigger monitors or
– Run the applications in the background
– A trigger message is put to the dead letter queue when
– The queue manager can not put a trigger message on the INITQ
– RUNMQTRM detects an error in the trigger message structure
– RUNMQTRM detects an unsupported application type
– RUNMQTRM can not start the application
– RUNMQTRM detects a data conversion error
– Trigger messages are non-persistent.
– Conditions for a trigger event are “persistent”, so, if a trigger message is lost a trigger message will be created when the conditions are met.
– Trigger messages take on the default priority of the INITQ.
– Trigger monitors can not retrieve messages that are part of a unit of work until the unit of work completes (applies whether it is committed or backed out).
– The queue manager counts both committed and uncommitted messages on the trigger queue when assessing the conditions for a trigger event.
– After recycling the queue manager or trigger monitor, a trigger message may be created if the triggered queue has messages on it and provided the trigger conditions are met.
– When triggering channels, trigger type FIRST or DEPTH is recommended.
– Disabling triggering is not under syncpoint control so triggering can not be re-enabled by backing out a unit of work. If a program backs out a unit of work or abends, triggering for DEPTH must be reenabled by MQSET, ALTER or CHGMQMQ.

MQ Problem determination – Queue Manager Diagnostics

Problem determination – Queue Manager Diagnostics

=> Error Reporting
– when connected to Queue Manager, all the logs routed to /var/mqm/qmgrs/QM_NAME/errors
– all other messages routed to /var/mqm/errors

=>Error message components
– messages always include an identifier and basic text
– a message contains date/time/pid/user/program/exception message/suggested action/source file which generated it
=> Log Rollover
– current messages are always appended to the first error log
– rollover occurs when the log reaches ~256KB
– this size can be changed with the following setting (in qm.ini for queue manager error logs, in mqs.ini for system error logs):
ErrorLogSize=1048576 #1MB error log file

=> Suppression of messages
– Allows non-critical messages to be suppressed
– can be set using the ini stanza
ExcludeMessage=9001,9002,9999 #don’t write these
SuppressMessage=9508 #only write once
SuppressInterval=30 #in any 30 secs

=> Error log recommendations
– save all error logs after a serious error occurs as well as the qm.ini
– be sure to note the time you observed the problem
– look for related messages before and after the time of the error
– try to correlate error log message with other diagnostics

– FFST is First Failure Support Technology
– FFST files are written to /var/mqm/errors
– The FFST file name format is AMQ[PID].x.FDC
– all threads in a process append their FFSTs to the same file

=> FFST layout
– The header
this includes date/time/hostname/pids/probe ID/build date/user/program/process/thread/QM/probe type/minor code etc…
– The function stack
every thread executing MQ code has a thread control block which contains a stack of MQ functions. This stack shows the context in which the error occurred
– The trace history
the trace shows the sequence of events leading up to a failure
– The component dumps
these show the common services control blocks
and include all user settings at the time of the error

=> Base MQ tracing
– Tracing records the sequence of events in a program
– MQ supports tracing on all queue managers and clients
– Traces are binary files which require formatting
– MQ ships programs for starting/stopping/formatting traces
– traces are written to /var/mqm/trace
– a trace contains a header with extended process information; each subsequent line of trace contains pid/tid/trace data
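On UNIX, the start/stop/format cycle looks roughly like this (QM1 is a placeholder queue manager name):

```
# Start tracing everything for queue manager QM1
strmqtrc -m QM1 -t all

# ...reproduce the problem, then stop tracing
endmqtrc -m QM1

# Format the binary .TRC files written to /var/mqm/trace
dspmqtrc /var/mqm/trace/AMQ*.TRC
```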

Migrating WebSphere MQ queue manager clusters

MQ Migration
Migrating queue managers is generally a simple process, because WebSphere MQ is designed to automatically migrate objects and messages, and support mixed version clusters. However, when planning the migration of a cluster, you need to consider a number of issues, which are described below.

Forward migration involves upgrading an existing queue manager to a later version and is supported on all platforms. You may wish to forward migrate in order to take advantage of new features or because the old version is nearing its end-of-service date.
It is important when making any system changes to test them in a test or QA environment before rolling them out in production, especially when migrating software from one version to another. Ideally, an identical migration plan would be executed in both test and production to maximise the chance of finding potential problems in test rather than production. In practice, test and production environments are unlikely to be architected or configured identically, or to have the same workloads, so the migration steps carried out in test are unlikely to exactly match those carried out in production. Whether or not the plans and environments for test and production differ, it is always possible to find problems when migrating the production cluster queue managers.

When creating the migration plan, you need to consider general queue manager migration issues, clustering specifics, wider system architecture, and change control policies. Document and test the plan before migrating production queue managers. Here is an example of a basic migration plan for a cluster queue manager:

1. Suspend queue manager from the cluster.
* Monitor traffic to the suspended queue manager. The cluster workload algorithm can choose a suspended queue manager if there are no other valid destinations available or an application has affinity with a particular queue manager.
2. Save a record of all cluster objects known by this queue manager. This data will be used after migration to check that objects have been migrated successfully.
* Issue DISPLAY CLUSQMGR(*) to view cluster queue managers.
* Issue DISPLAY QC(*) to view cluster queues.
3. Save a record of the full repositories view of the cluster objects owned by this queue manager. This data will be used after migration to check that objects have been migrated successfully.
* Issue DISPLAY CLUSQMGR on the full repositories.
* Issue DISPLAY QC(*) WHERE(CLUSQMGR EQ ) on the full repositories.
4. Stop queue manager.
5. Take a backup of the queue manager.
6. Install the new version of WebSphere MQ.
7. Restart queue manager.
8. Ensure that all cluster objects have been migrated successfully.
* Issue DISPLAY CLUSQMGR(*) to view cluster queue managers and check output against the data saved before migration.
* Issue DISPLAY QC(*) to view cluster queues and check output against the data saved before migration.
9. Ensure that the queue manager is communicating with the full repositories correctly. Check that cluster channels to full repositories can start.
10. Ensure that the full repositories still know about the migrated cluster queue manager and its cluster queues.
* Issue DISPLAY CLUSQMGR on the full repositories and check output against the data saved before migration.
* Issue DISPLAY QC(*) WHERE(CLUSQMGR EQ ) on the full repositories and check output against the data saved before migration.
11. Test that applications on other queue managers can put messages to the migrated cluster queue manager’s queues.
12. Test that applications on the migrated queue manager can put messages to the other cluster queue manager’s queues.
13. Resume the queue manager.
14. Closely monitor the queue manager and applications in the cluster for a period of time.

Backout plan
A backout plan should be documented before migrating. It should detail what constitutes a successful migration, the conditions that trigger the backout procedure, and the backout procedure itself. The procedure could involve removing or suspending the queue manager from the cluster, backwards migrating, or keeping the queue manager offline until an external issue is resolved.

Websphere MQ FFST

What is FFST?
FFST stands for First Failure Support Technology, and is technology within WebSphere MQ designed to create detailed reports for IBM Service with information about the current state of a part of a queue manager together with historical data.
What are they for?
They are used to report unexpected events or states encountered by WebSphere MQ. (Alternatively, they can be generated upon request).

Note that return codes are used to inform application programmers of expected states or errors in a WebSphere MQ application. There are exceptions to this rule, but as a rule of thumb, FFSTs are used to report something that will need to be actioned by:
• system administrators – such as where FFSTs report resource issues such as running low on disk space
• IBM – where FFSTs report a potential code error in WebSphere MQ that (unless already identified and corrected in existing maintenance) may need correcting
Where are they?
On UNIX systems, they are written to /var/mqm/errors
They are contained in files with the extension .FDC
The file name will begin with AMQ followed by the process id of the process which reported the error, e.g. /var/mqm/errors/AMQ12345.0.FDC is the first FFST file produced by a process with ID 12345

What do they contain?
FFST files are text files containing error reports for a single process.
If a single process produces more than one error report, these are all included within the same FFST file for that process, in the order in which they were generated.
How should I look at these files?
FFST files are just text files, so your favorite text editor is normally the best place to start.
The tool ffstsummary is also useful – it produces a summary of FFST reports in the current directory, sorted into time order. This can be a good place to start to see the errors reported in your errors directory.
For example:
[mqm@test~]$ cd /var/mqm/errors
[mqm@test errors]$ ffstsummary
AMQ21433.0.FDC 2007/04/10 10:05:45 amqzdmaa 21433 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21429.0.FDC 2007/04/10 10:05:45 amqzmur0 21429 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21469.0.FDC 2007/04/10 10:05:45 runmqlsr 21469 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21422.0.FDC 2007/04/10 10:05:45 amqzfuma 21422 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21424.0.FDC 2007/04/10 10:05:45 amqzmuc0 21424 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21431.0.FDC 2007/04/10 10:05:45 amqrrmfa 21431 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21449.0.FDC 2007/04/10 10:05:45 amqzlaa0 21449 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21434.0.FDC 2007/04/10 10:05:45 amqzmgr0 21434 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21452.0.FDC 2007/04/10 10:05:45 runmqchi 21452 2 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
AMQ21417.0.FDC 2007/04/10 10:05:45 amqzxma0 21417 4 XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK
The columns in the output above show:
• filename – which FDC file contains the FFST report
• time and date of the report
• process name – name of the process which produced the report
• process and thread ids – for the process which produced the report
• probe id
• component – part of WebSphere MQ where the report was produced
• error code – major errorcode and minor code
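The fixed column layout makes these lines easy to post-process; here is a small sketch (the field names are mine, and the sample line is taken from the output above):

```python
def parse_ffst_summary_line(line):
    """Split one ffstsummary output line into named fields."""
    parts = line.split()
    return {
        "file": parts[0],
        "date": parts[1],
        "time": parts[2],
        "process_name": parts[3],
        "pid": int(parts[4]),
        "tid": int(parts[5]),
        "probe_id": parts[6],
        "component": parts[7],
        "major_error": parts[8],
        "minor_error": parts[9],
    }


line = ("AMQ21433.0.FDC 2007/04/10 10:05:45 amqzdmaa 21433 2 "
        "XC338001 xehAsySignalHandler xecE_W_UNEXPECTED_ASYNC_SIGNAL OK")
report = parse_ffst_summary_line(line)
print(report["probe_id"])  # XC338001
```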
What does an FFST report contain?
I’ve added some numbers on the left to mark out points worth noting…
Sample FFST Report:
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
(1) | Date/Time :- Wednesday Feb 02 13:25:56 IST 2008 |
(2) | Host Name :- joseph.joseph.com (Linux 2.6.9-42.0.10.EL) |
| PIDS :- 5724H7207 |
(3) | LVLS :- |
| Product Long Name :- WebSphere MQ for Linux (POWER platform) |
| Vendor :- IBM |
(4) | Probe Id :- XC338001 |
| Application Name :- MQM |
(5) | Component :- xehAsySignalHandler |
(6) | SCCS Info :- lib/cs/unix/amqxerrx.c, |
| Line Number :- 737 |
| Build Date :- Sep 21 2007 |
| CMVC level :- p600-200-060921 |
| Build Type :- IKAP – (Production) |
(7) | UserID :- 00011243 (mqm ) |
(8) | Program Name :- runmqlsr |
| Addressing mode :- 64-bit |
(9) | Process :- 16337 |
| Thread-Process :- 16337 |
(10) | Thread :- 2 |
| ThreadingModel :- PosixThreads |
(11) | Major Errorcode :- xecE_W_UNEXPECTED_ASYNC_SIGNAL |
| Minor Errorcode :- OK |
| Probe Type :- MSGAMQ6209 |
| Probe Severity :- 3 |
(12) | Probe Description :- AMQ6209: An unexpected asynchronous signal (2 : |
| SIGINT) has been received and ignored. |
| FDCSequenceNumber :- 0 |
| Arith1 :- 2 2 |
(13) | Comment1 :- SIGINT |
| Comment2 :- Signal sent by pid 0 |
| |

(14) MQM Function Stack

(15) MQM Trace History
{ xppInitialiseDestructorRegistrations
} xppInitialiseDestructorRegistrations rc=OK
{ xehAsySignalMonitor
-{ xcsGetEnvironmentInteger
–{ xcsGetEnvironmentString
–} xcsGetEnvironmentString rc=xecE_E_ENV_VAR_NOT_FOUND

(16) Process Control Block
0x80006ad890 58494850 000029E8 00003FD1 00000004 XIHP..)…?…..
0x80006ad8a0 00000000 10029F70 00000000 10033A50 …….p……
0x80006ad8b0 00000000 00000000 00000000 00000000 …………….
0x80006ad8c0 to 0x80006ad900 suppressed, 5 lines same as above
0x80006ad910 00000000 00000001 00000000 00000000 …………….
0x80006ad920 00000000 00000000 00000000 00000000 …………….
0x80006ad930 to 0x80006ad9d0 suppressed, 11 lines same as above
0x80006ad9e0 00000000 00000000 00000001 00568001 ………….V..
0x80006ad9f0 00FB8000 00000000 00000080 00760000 ………….v..
0x80006ada00 00000000 00000000 00000000 00000000 …………….
0x80006ada10 to 0x80006ae9f0 suppressed, 255 lines same as above
0x80006aea00 00000000 FFFFFFFF FFFFFFFF 00000000 …………….
0x80006aea10 00000000 00000000 00000001 FFFFFFFE …………….
0x80006aea20 00000001 00000000 00000000 00000000 …………….
0x80006aea30 00000080 0069A380 00000000 00000000 …..i……….
0x80006aea40 00000000 00000000 00000000 00000000 …………….


(17) Environment Variables:
SSH_CLIENT=::ffff: 2625 22
SSH_CONNECTION=::ffff: 2625 ::ffff: 22
LESSOPEN=|/usr/bin/lesspipe.sh %s
1. Date and time that this report was produced
For many problems, this is the most useful piece of information – allowing an error report to be correlated with other known events.
2. hostname for the machine where this report was produced
3. Version and maintenance level for WebSphere MQ
This is useful when comparing an error report against a documented known problem.
4. Probe ID
This is an internal method of identifying the error report. It identifies a single point in the WebSphere MQ source code where the report was produced (consisting of two letters giving a component code, a three digit function code, and a three digit probe identifier).
This often makes it the best way to uniquely identify the error that the report is describing. More on this a bit later…
5. Component
this is the bit of WebSphere MQ which produced the report. As with the source information below, it is generally more useful to us than it is to users, although the name can sometimes give a useful hint as to the nature of the error report. For example, in this case where the report is the result of my using Control-C to generate an interrupt signal, you can see that the component which produced the report was a signal handler.
6. source information
Although this information isn’t useful to users, I thought it might be interesting to highlight that an FFST will identify exactly where it was produced, down to the source code file, line number and version
7. User id that was running the process which produced the report This is useful to confirm whether a problem was the result of insufficient user privileges.
8. process name of process which produced the report
9. process id for the process which produced the report
10. thread id for the process which produced the report
11. error codes for the report
12. a longer description of the error code for the report
This is a textual (English) description containing information that a WebSphere MQ developer thought might be helpful if the situation were to occur. Sometimes this information may be useful to users, such as messages identifying an operating system function which has failed and what the error code was. Other times, it will only be useful to IBM Service.
13. Additional comment information
14. Function stack for the process at the time of the report
15. A history of function calls made by the process leading up to the report
16. A series of dumps

In the WebSphere MQ source code, functions can register data items that may be of interest. If a function has data that could be useful (such as in diagnosing or debugging a problem), it can register it with the engine that produces FFST reports. This means that in the event of an FFST being produced, this data will be included. These items are deregistered when a function completes.
This is normally of more use to IBM Service than users, however there may be times – such as when some message data is included – when you will recognise some of the data here.
17. Environment variables for the environment of the process which produced the report

What can I do if I have an FFST report?
Monitoring for the production of FDC files is an important part of handling the occurrence of errors in a WebSphere MQ system. Prompt handling of a problem can be key to a timely resolution.
If an FDC file is created, the next step is probably to determine if this is something that requires you to take an action, and if so how urgent is it. A number of factors will influence this, including:
• Are queue managers running?
• Are applications still working?
• Does the probe description give any insight into why the FFST was generated?
• Does the time and date of the FFST correspond with any other known events or occurrences at the same time which may explain the error?
If the FFST identifies a resource issue, such as low disk space, then this will normally give enough information for a system administrator to identify and correct the source of the problem.
If you are unable to determine an explanation for the FFST, then a useful next step is to look to see if others have seen this FFST before, and if so what they found it to mean and what they needed to do.
This is where the probe id from the FFST is very useful. In the majority of cases (for one notable exception, see my discussion on signals below), this will be a unique eye-catcher for the issue being reported. This means that you can search for this short string on the WebSphere MQ support site on ibm.com or in the IBM Support Assistant. Often, this will reveal cases where someone has encountered this FFST before and the fix that resulted.
Beyond this point, you will most likely need to raise a PMR with IBM Service. It is useful to send all FFSTs from your system (rather than just the one that you believe to be of interest), as following the history can be key to resolving an issue. It is also useful to send the WebSphere MQ system (/var/mqm/errors/AMQ*.LOG) and queue manager (/var/mqm/qmgrs/errors/AMQ*.LOG) error logs, together with a clear description of what you are seeing and the impact on the system and your business.
Signal handling
I generally find the probe id to be a unique identifier for a specific problem. While this is usually true, one notable exception is FFSTs produced by the signal and exception handlers.
The signal handler component produces FFSTs to report signals sent to WebSphere MQ processes. This means that the information in the FFST (such as the probe id and source code file, line number, etc.) is about the signal handler which caught the signal, not whatever it was that caused or created the signal.
This is less of a problem if the signal was generated externally to WebSphere MQ, such as the SIGINT that I generated with Ctrl-C in the example above. The FFST contains information about the process which was sent the signal and the time and date of the signal.
It can be more complex if the signal is generated from elsewhere within WebSphere MQ, such as a SIGSEGV from a segmentation fault in another WebSphere MQ process. The exception handler will generate an FFST to record the SIGSEGV, however it is important to bear in mind that any such FFST contains a report about where the SIGSEGV was caught, not where it was generated. This doesn’t mean that the cause cannot be found, but it does mean that the FFST information such as the probe id is not necessarily the sort of unique eye-catcher described above.
Generating FFSTs on request
I mentioned above that it is possible to generate FFSTs manually. This can be done using the following commands:
amqldbgn -p PID (on Windows)
kill -USR2 PID (on UNIX platforms)
where PID is the process ID for a WebSphere MQ process. FFST reports generated in this way will have a probe id that ends in 255.

Websphere MQ – Remote Queue Operations

New Remote Queue
define qremote (RQ_NAME) +
DESCR('Description') + -> Description
PUT(ENABLED) + -> Put (enabled/disabled)
XMITQ(XQ_NAME) + -> Transmission queue name
RNAME(REMOTE_Q_NAME) + -> Name of the target queue on the remote queue manager
RQMNAME(REMOTE_QM_NAME) + -> Remote queue manager name
REPLACE -> Replace if existing
Note: depth and trigger attributes such as MAXDEPTH, TRIGTYPE, MSGDLVSQ, QDEPTHHI and QDEPTHLO apply to local queues, not to remote queue definitions.

Change the Queue properties
alter qremote (RQ_NAME) [property]

Display Queue properties
display qremote (RQ_NAME)

Give permissions to Queue
setmqaut -m QM_NAME -n RQ_NAME -t queue -g GROUP_NAME [+browse +get +dsp +put]
(-m takes the queue manager name and -n the queue name; use -g for a group or -p for a user)

Display existing permissions
dspmqaut -m QM_NAME -n RQ_NAME -t queue -g GROUP_NAME

Websphere MQ – Transmit Queue Operations

New Transmit Queue
define qlocal (XQ_NAME) +
DESCR('Description') + -> Description
USAGE(XMITQ) + -> Queue usage (transmission queue)
MAXDEPTH(20000) + -> Max depth
TRIGTYPE(FIRST) + -> Trigger type (if present)
MSGDLVSQ(PRIORITY) + -> Message delivery sequence
QDEPTHHI(80) + -> High queue depth event threshold (%)
QDEPTHLO(20) + -> Low queue depth event threshold (%)
REPLACE -> Replace if existing

Change the Queue properties
alter qlocal (XQ_NAME) [property]

Display Queue properties
display qlocal (XQ_NAME)

Give permissions to Queue
setmqaut -m QM_NAME -n XQ_NAME -t queue -g GROUP_NAME [+browse +get +dsp +put]

Display existing permissions
dspmqaut -m QM_NAME -n XQ_NAME -t queue -g GROUP_NAME

Websphere MQ – Local Queue Operations

New Local Queue
define qlocal (LQ_NAME) +
DESCR('Description') + -> Description
MAXDEPTH(20000) + -> Max depth
DEFPSIST(NO) + -> Default persistence YES/NO
TRIGTYPE(FIRST) + -> Trigger type
MSGDLVSQ(PRIORITY) + -> Message delivery sequence
QDEPTHHI(80) + -> High queue depth event threshold (%)
QDEPTHLO(20) + -> Low queue depth event threshold (%)
REPLACE -> Replace if existing

Change the Queue properties
alter qlocal (LQ_NAME) [property]

Display Queue properties
display qlocal (LQ_NAME)

Give permissions to Queue
setmqaut -m QM_NAME -n LQ_NAME -t queue -g GROUP_NAME [+browse +get +dsp +put]

Display existing permissions
dspmqaut -m QM_NAME -n LQ_NAME -t queue -g GROUP_NAME

Websphere MQ – Queue Manager Operations

Create new Queue manager
crtmqm QM_NAME

Display all Queue managers
dspmq

starting Queue Manager
strmqm QM_NAME

Stopping Queue Manager
endmqm QM_NAME

Connect to Queue Manager
runmqsc QM_NAME

Delete Queue manager
dltmqm Queue_Manager

Websphere MQ – Listener Operations

New Listener
def listener(LISTENER_NAME) trptype(tcp) port(PORT_NUMBER)

Start listener
start listener (LISTENER_NAME)

Stop Listener
stop listener (LISTENER_NAME)

Listener Status
display lsstatus (LISTENER_NAME)

Websphere MQ – Receiver Channel Operations

New Receiver Channel
define channel (CHANNEL_NAME) chltype(RCVR) trptype(tcp) replace
Starting Channel
start channel (CHANNEL_NAME)

Stopping Channel
stop channel (CHANNEL_NAME)

Status of channel
display chs (CHANNEL_NAME)

Display channel properties
display channel (CHANNEL_NAME)
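The display commands above report results as ATTRIBUTE(value) pairs. When scripting status checks around captured runmqsc output, a small hedged Python sketch can pull those pairs into a dictionary (the sample line in the test is illustrative, not output captured from a real queue manager):

```python
import re

def parse_mqsc_attrs(line: str) -> dict:
    """Extract ATTRIBUTE(value) pairs from one line of runmqsc
    display output into a dict keyed by attribute name."""
    return {name: value
            for name, value in re.findall(r"(\w+)\(([^)]*)\)", line)}
```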
