26. Operation and Deployment¶
This chapter contains guidelines and best practices to help plan and prepare an environment to meet the demands that AMPS is expected to manage.
Capacity Planning¶
Sizing an AMPS deployment can be a complicated process that depends on many factors, including the configuration parameters used for AMPS, the data used within the deployment, and how the deployment will be used. This section presents guidelines that you can use to size your host environment for an AMPS deployment, covering what needs to be considered along every dimension: Memory, Storage, CPU, and Network.
Memory¶
Beyond storing its own binary images in system memory, AMPS also tries to store its SOW and indexing state in memory to maximize the performance of record updates and SOW queries.
AMPS needs less than 1GB for its own binary image and initial start-up state for most configurations. In the worst case, because of indexing for queries, AMPS may need up to twice the size of messages stored in the SOW. AMPS maintains a copy of the latest journal file in memory for quick access, and maintains a small amount of metadata for each message in an AMPS queue. The MessageMemoryLimit configured for the instance (or the total of all MessageMemoryLimit settings for each Transport in the instance) specifies the total amount of memory devoted to buffering messages for clients, including conflated subscriptions, aggregated subscriptions, and paginated subscriptions.
This puts a general memory consumption estimate for AMPS itself at:
1GB + ( 2 * S * M ) + ( C * 4096 bytes ) + TMemLimit + ( 2 * J ) + ( Q * 196 bytes )
Example 26.1: Memory Estimation equation
where:
S = Average message size for topics in the SOW (including views and conflated topics)
M = Number of messages in all topics in the SOW (including views and conflated topics)
C = Number of Clients
TMemLimit = Total of all MessageMemoryLimit settings in the instance
J = JournalSize setting
Q = Total number of messages in the queues for the instance
1GB + ( 2 * 1024 * 20,000,000 ) + ( 200 * 4096) + ( 10,000,000,000) +
( 1,000,000,000 * 2 ) + ( 1,000,000 * 196)
Example 26.2: Example memory estimation equation
where:
S = 1024
M = 20,000,000
C = 200
TMemLimit = 10,000,000,000 (10GB)
J = 1,000,000,000 (1GB)
Q = 1,000,000
An AMPS deployment expected to hold 20 million messages with an average message size of 1KB, with a total MessageMemoryLimit of 10GB, a JournalSize of 1GB, 200 connected clients, and a total of 1,000,000 messages active in message queues at any given time, would consume roughly 53GB.
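To experiment with these numbers, the estimate above can be expressed as a short calculation. The sketch below is illustrative only; the function and variable names are not AMPS parameters, and it simply encodes the formula from Example 26.1.

```python
def estimate_amps_memory(avg_msg_size, sow_msg_count, clients,
                         message_memory_limit, journal_size, queue_msgs):
    """Rough AMPS memory estimate in bytes, following Example 26.1."""
    base = 1 * 1024**3                        # ~1GB for the binary image and startup state
    sow = 2 * avg_msg_size * sow_msg_count    # worst case: 2x SOW data, due to indexing
    client_meta = clients * 4096              # per-client overhead
    journal = 2 * journal_size                # in-memory copy of the latest journal
    queue_meta = queue_msgs * 196             # per-message queue metadata
    return base + sow + client_meta + message_memory_limit + journal + queue_meta

# Example 26.2 values: ~54GB, in line with the "roughly 53GB" figure above
total = estimate_amps_memory(1024, 20_000_000, 200,
                             10_000_000_000, 1_000_000_000, 1_000_000)
print(f"{total / 1e9:.1f} GB")
```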
60East recommends leaving headroom on the system above the capacity estimate for operating system tasks and unexpected traffic. For critical systems that cannot afford downtime even when unexpected events occur, a good recommendation is to allocate 200% of the standard capacity for AMPS, or enough capacity to handle the largest volume increase historically observed for the system. (For example, if the largest volume spike observed was 350% of the typical volume at that time, then planning to be able to handle at least 350% of the typical volume would be important for a critical system).
Storage¶
AMPS needs enough space to store its own binary images, configuration files, SOW persistence files, log files, transaction log journals, and slow client offline storage, if any. Not every deployment configures a SOW or transaction log, so the storage requirements are largely driven by the configuration.
AMPS log files¶
Log file sizes vary depending on the log level and how the engine is used. In the worst case (trace logging), AMPS will need at least enough storage for every message published into AMPS and every message sent out of AMPS, plus 20%. For info level logging, a good estimate of AMPS log file size is 2MB per 10 million messages published.
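For sanity-checking disk budgets, these rules of thumb can be written down as a small helper. This is only a sketch of the guidance above, not an AMPS utility; the level names and numbers come directly from the estimates in this section.

```python
def estimate_log_storage(msgs_published, msgs_delivered, avg_msg_size, level="info"):
    """Very rough AMPS log storage estimate in bytes, per the guidance above."""
    if level == "trace":
        # worst case: every message in and every message out, plus 20%
        return (msgs_published + msgs_delivered) * avg_msg_size * 1.2
    # info level: roughly 2MB per 10 million messages published
    return (msgs_published / 10_000_000) * 2 * 1024**2
```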
Logging space overhead can be capped by implementing a log rotation strategy which uses the same file name for each rotation. This strategy effectively truncates the file when it reaches the log rotation threshold to prevent it from growing larger.
SOW¶
When calculating SOW storage, there are several factors to keep in mind: the average size of the messages stored in the SOW, the number of messages stored in the SOW, and the SlabSize defined in the configuration file. Using these values, it is possible to estimate the minimum and maximum storage requirements for the SOW:
Min = ( MsgSize * MsgCount ) + ( Cores * SlabSize )
Example 26.3: Minimum SOW Size
where
Min = Minimum SOW Size
MsgSize = Average SOW Message Size
MsgCount = Number of SOW Messages
SlabSize = Slab Size for the SOW
Cores = Number of processor cores in the system
Max = ( ( MsgCount / ( ( SlabSize / MsgSize) / 2) ) * SlabSize) + (Cores * SlabSize)
Example 26.4: Maximum SOW Size
where
Max = Maximum SOW Size
MsgSize = Average SOW Message Size
MsgCount = Maximum number of messages stored in the SOW
SlabSize = Slab size for the SOW
Cores = Number of CPU cores in the system
The storage requirement should fall between the two values above; however, it is still possible for the SOW to consume additional storage based on the unused capacity configured for each SOW topic. Further, notice that AMPS reserves the configured SlabSize for each processor core in the system the first time a thread running on that core writes to the SOW. For example, with the SlabSize set to 1MB in the AMPS configuration file, the SOW for that topic will consume 1MB per processor core even with no messages stored in the SOW. Pre-allocating SOW capacity in slab-sized chunks as each chunk is needed is more efficient for the operating system and storage devices, and helps amortize the cost of extending the SOW over more messages.
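For planning purposes, the two formulas can be combined into a single helper. This is a minimal sketch that just encodes Examples 26.3 and 26.4; the figures in the example call are illustrative, not recommendations.

```python
def sow_storage_bounds(avg_msg_size, msg_count, slab_size, cores):
    """Estimated minimum and maximum SOW storage in bytes (Examples 26.3 and 26.4)."""
    minimum = (avg_msg_size * msg_count) + (cores * slab_size)
    msgs_per_half_slab = (slab_size / avg_msg_size) / 2
    maximum = (msg_count / msgs_per_half_slab) * slab_size + (cores * slab_size)
    return minimum, maximum

# e.g. 1KB messages, 20 million records, 1MB SlabSize, 16 cores
low, high = sow_storage_bounds(1024, 20_000_000, 1024**2, 16)
print(f"{low / 1e9:.1f} GB to {high / 1e9:.1f} GB")
```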
It is also important to be aware of the maximum message size that AMPS guarantees the SOW can hold. The maximum message size is calculated in the following manner:
Max = SlabSize - 64 bytes
Example 26.5: Maximum Message Size allowed in SOW
where
Max = Maximum message size
SlabSize = The configured SlabSize for the SOW
This calculation says that the maximum message size that can be stored in the SOW as a single record is the SlabSize minus 64 bytes for the record header information. AMPS enforces a lower limit of approximately 1MB: if the maximum size works out to less than 1MB, AMPS will use 1MB as the maximum message size for the topic.
Transaction Logs¶
Transaction logs are used for message replay, replication, and to ensure consistency in environments where each message is critical. Transaction logs are optional in AMPS, and can be configured for individual topics or filters. When planning for transaction logs, there are three main considerations:
- The total size needed for the transaction log, including in disaster recovery scenarios
- The size to allow for each file that makes up the transaction log
- How many files to preallocate
You can calculate the approximate total size of the transaction log as follows:
Capacity = ( S + 512 bytes ) * N
Example 26.6: Transaction Log Sizing Approximation
where
Capacity = Estimated storage capacity required for transaction log
S = Average message size
N = Number of messages to retain
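As a quick sanity check, the approximation in Example 26.6 can be evaluated directly. The figures in the example call below are purely illustrative.

```python
def txlog_capacity(avg_msg_size, msgs_to_retain):
    """Approximate transaction log storage in bytes (Example 26.6)."""
    return (avg_msg_size + 512) * msgs_to_retain

# e.g. retaining one day of 1KB messages published at 5,000 messages/second
print(f"{txlog_capacity(1024, 5_000 * 86_400) / 1e9:.0f} GB")
```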
Size your files to match the aging policy for the transaction log data. To remove data from the transaction log, you simply archive or delete files that are no longer needed. You can size your files to make this easier. For example, if your application typically generates 100GB a day of transaction log, you could size your files in 10GB units to make it easier to remove 100GB increments.
AMPS allows you to preallocate files for the transaction log. For applications that are very latency-sensitive, preallocation can help provide consistent latency. We recommend that those applications preallocate files, if storage capacity and retention policy permit. For example, an application that sees heavy throughput during a working day might preallocate enough files so that there is no need for additional allocation within the working day.
Notice that, if your application uses replication, the AMPS transaction log maintenance actions will not delete unreplicated messages that were initially published to this instance. This means that, when calculating the maximum storage space required, the recovery window for a failure is also important. For example, many systems have a policy of not restarting a failed system until a scheduled maintenance window: if one server in a replicated set of servers could, potentially, be offline for up to 8 hours, then the other servers must be able to store a minimum of 8 hours of journals, even in cases where the normal retention period would be shorter.
Other Storage Considerations¶
The previous sections discuss sizing the storage; however, in some scenarios the performance of the storage devices must also be taken into consideration.
One such scenario is a deployment in which the AMPS transaction log is expected to be heavily used. If sustained performance greater than 50MB/second is required from the AMPS transaction log, experience has demonstrated that flash storage (or better) is recommended: magnetic hard disks lack the performance to sustain rates above this with a consistent latency profile.
CPU¶
SOW queries with content filtering make heavy use of CPU-based operations and, as such, CPU performance directly impacts the content filtering performance and rates at which AMPS processes messages. The number of cores within a CPU largely determines how quickly SOW queries execute.
AMPS contains optimizations which are only enabled on recent 64-bit x86 CPUs. To achieve the highest level of performance, consider deploying on a CPU which includes support for the SSE 4.2 instruction set.
To give an idea of AMPS performance, repeated testing has demonstrated that a moderate query filter with 5 predicates can be executed against 1KB messages at more than 1,000,000 messages per second, per core, on an Intel i7 3GHz CPU. This applies to both subscription-based content filtering and SOW queries. Actual messaging rates will vary based on matching ratios and network utilization.
Network¶
When capacity planning a network for AMPS, the requirements are largely dependent on the following factors:
- average message size
- the rate at which publishers will publish messages to AMPS
- the number of publishers and the number of subscribers.
AMPS requires sufficient network capacity to service inbound publishing as well as outbound messaging requirements. In most deployments, outbound messaging to subscribers and query clients has the highest bandwidth requirements due to the increased likelihood of a "one to many" relationship, where a single published message matches subscriptions/queries for many clients.
Estimating network capacity requires knowledge about several factors, including but not limited to: the average message size published to the AMPS instance, the number of messages published per second, the average expected match ratio per subscription, the number of subscriptions, and the background query load. Once these key metrics are known, then the necessary network capacity can be calculated:
R * Sz * ( 1 + M * Sb ) + Q
Example 26.7: Network capacity formula
where
R = Rate
Sz = Average Message Size
M = Match Ratio
Sb = Number of Subscribers
Q = Query Load
where “Query Load” is defined as:
Mq * S * Qs
where
Mq = Messages per Query
S = Average Message Size
Qs = Queries Per Second
Consider a deployment required to process published messages at a rate of 5,000 messages per second, with an average message size of 600 bytes, an expected match rate per subscription of 2% (or 0.02), and 100 subscriptions. The deployment is also expected to process 5 queries per minute (or 1/12 of a query per second), with each query expected to return 1,000 messages.
5000 * 600 B * ( 1 + 0.02 * 100 ) + ( 1000 * 600 B * 1/12 ) ~ 9 MB / s ~ 72 Mb / s
Based on these requirements, this deployment would need at least 72Mb/s of network capacity to achieve the desired goals. This analysis suggests that AMPS by itself would fit within a 100Mb/s class network. It is important to note that this analysis does not examine any other network-based activity which may exist on the host; as such, a larger-capacity networking infrastructure than 100Mb/s would likely be required.
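The same calculation can be scripted for what-if analysis. This sketch simply restates Example 26.7; it is not part of AMPS.

```python
def network_bytes_per_second(publish_rate, avg_msg_size, match_ratio,
                             subscribers, msgs_per_query, queries_per_sec):
    """Estimated network throughput in bytes/second (Example 26.7)."""
    fanout = publish_rate * avg_msg_size * (1 + match_ratio * subscribers)
    query_load = msgs_per_query * avg_msg_size * queries_per_sec
    return fanout + query_load

# Values from the example above: 5,000 msgs/s, 600-byte messages, 2% match ratio,
# 100 subscribers, and 1,000-message queries arriving 5 times per minute
bps = network_bytes_per_second(5000, 600, 0.02, 100, 1000, 5 / 60)
print(f"~{bps / 1e6:.0f} MB/s ~ {bps * 8 / 1e6:.0f} Mb/s")
```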
Replication Network Bandwidth¶
For replication connections, the general recommendation is to estimate bandwidth needs as though each outgoing replication destination is a subscriber that subscribes to all of the replicated topics, and each incoming destination is a publisher that fully publishes the replicated topics. Although AMPS replication connections support compression, the general recommendation is to provision enough network capacity to support the full replication stream, and then to use compression to save capacity.
NUMA Considerations¶
AMPS is designed to take advantage of non-uniform memory access (NUMA). For the lowest latency in networking, we recommend that you install your NIC in the slot closest to NUMA node 0. AMPS runs critical threads on node 0, so positioning the NIC closest to that node provides the shortest path from processor to NIC.
When a single instance of AMPS is deployed on the system, as is the case with most critical production systems, 60East recommends leaving AMPS NUMA tuning enabled (this is the default).
If more than one instance of AMPS is running on the same system, 60East recommends disabling AMPS NUMA tuning in the AMPS configuration file and relying on the operating system NUMA management.
When AMPS is deployed in a virtual machine, 60East recommends disabling AMPS-level NUMA tuning in the AMPS configuration file.
Linux Operating System Configuration¶
This section covers some settings which are specific to running AMPS on a Linux Operating System.
ulimit¶
The ulimit command is used by a Linux administrator to get and set user limits on various system resources.
ulimit -c
It is common for an AMPS instance to be configured to consume gigabytes of memory for large SOW caches. If a failure were to occur in a large deployment, it could take anywhere from seconds to hours (depending on storage performance and process size) to dump the core file. AMPS has a minidump reporting mechanism built in that collects information important to debugging an instance before exiting. This minidump is much faster than dumping a core file to disk. For this reason, it is recommended that the per-user core file size limit be set to 0 to prevent a large process image from being dumped to storage.
ulimit -n
The number of file descriptors allowed for a user running AMPS needs to be at least double the sum of counts for the following: connected clients, SOW topics, and pre-allocated journal files. Minimum: 4096. Recommended: 32768, or the value recommended by AMPS in any diagnostic messages, whichever is greater.
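As a rough guide, that requirement can be computed from the expected counts. The helper below is only a sketch of the rule stated above; the 4096 floor comes from the minimum given in this section, and the recommended 32768 still applies as a practical baseline.

```python
def suggested_nofile_limit(clients, sow_topics, preallocated_journals):
    """Suggested ulimit -n value: at least double the sum of the counts, never below 4096."""
    return max(2 * (clients + sow_topics + preallocated_journals), 4096)

print(suggested_nofile_limit(clients=2000, sow_topics=50, preallocated_journals=20))  # 4140
```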
Transparent Huge Pages¶
Transparent huge pages can add a large amount of overhead to memory management. For best performance, 60East recommends disabling transparent huge pages (by setting the value to never) or requiring applications to explicitly request transparent huge pages (by setting the value to madvise).
Unless the system runs other applications that are known to explicitly request transparent huge pages and benefit from having them available, 60East generally recommends disabling transparent huge pages.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
This will change the value until the operating system is rebooted. To make a permanent change to this setting, add this command to the startup scripts, or add the transparent_hugepage=never option to the kernel startup flags (see the documentation for your Linux distribution for details).
Recommended: never
/proc/sys/fs/aio-max-nr¶
Each AMPS instance requires the kernel AIO subsystem to support at least 16384 simultaneous I/O operations, plus 8192 for each SOW topic. The aio-max-nr setting is global to the host and impacts all applications, so this value needs to be set high enough to service all applications using AIO on the host. Minimum: 65536. Recommended: 1048576.
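To estimate the AIO requirement for AMPS on a host, the per-instance rule can be summed across instances. A minimal sketch, assuming AMPS is the only AIO consumer being counted:

```python
def required_aio_operations(sow_topics, instances=1):
    """Kernel AIO operations AMPS needs: 16384 per instance plus 8192 per SOW topic."""
    return instances * 16384 + 8192 * sow_topics

# a single instance with 6 SOW topics needs 65536 operations -- the stated minimum
print(required_aio_operations(sow_topics=6))
```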
To view the value of this setting, as root you can enter the following command:
cat /proc/sys/fs/aio-max-nr
To edit this value, as root you can enter the following command:
sysctl -w fs.aio-max-nr=1048576
This command will update the value for /proc/sys/fs/aio-max-nr and allow 1,048,576 simultaneous I/O operations, but will only do so until the next time the machine is rebooted. To make a permanent change to this setting, as a root user, edit the /etc/sysctl.conf file and either edit or append the following setting:
fs.aio-max-nr = 1048576
/proc/sys/fs/file-max¶
Each AMPS instance needs file descriptors to service connections and maintain file handles for open files. This number needs to be at least double the sum of counts for the following: connected clients, SOW topics, and pre-allocated journal files. The file-max setting is global to the host and impacts all applications, so this needs to be set high enough to service all applications on the host. Minimum: 262144. Recommended: 6815744.
To view the value of this setting, as root you can enter the following command:
cat /proc/sys/fs/file-max
To edit this value, as root you can enter the following command:
sysctl -w fs.file-max=6815744
This command will update the value for /proc/sys/fs/file-max and allow 6,815,744 concurrent files to be opened, but will only do so until the next time the machine is rebooted. To make a permanent change to this setting, as a root user, edit the /etc/sysctl.conf file and either edit or append the following setting:
fs.file-max = 6815744
/proc/sys/vm/min_free_kbytes¶
This parameter sets the minimum amount of memory to keep free in the system. Setting this value properly can help the operating system function more effectively in low-memory situations. If this value is set too low, the operating system can have difficulty reclaiming memory, which can lead to unnecessary out-of-memory events. If this value is set too high, overall system efficiency decreases as the operating system can spend more time than necessary reclaiming memory.
60East recommends setting this parameter to 1% of the physical memory on the system, rounding up to the nearest GB.
Notice that the units of this parameter are in kilobytes. For example, to set this value for a system that has 128GB of memory, you would calculate 1% of the physical memory (1.28 GB), round up to the nearest GB (2 GB) and then allocate 2000000 KB as the min_free_kbytes.
Minimum: 1000000 (1GB). Recommended: 1% of physical memory, rounded up to the nearest GB.
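The recommendation translates into a small calculation. This sketch follows the 128GB example above and uses the same decimal-kilobyte convention.

```python
import math

def recommended_min_free_kbytes(physical_memory_gb):
    """1% of physical memory, rounded up to the nearest GB, expressed in kilobytes."""
    target_gb = math.ceil(physical_memory_gb * 0.01)
    return target_gb * 1_000_000

print(recommended_min_free_kbytes(128))   # 2000000, matching the example above
```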
To edit this value, as root you can enter the following command:
sysctl -w vm.min_free_kbytes=2000000
Notice that this tuning recommendation is designed for a server-class machine with a reasonable amount of memory. For a small development machine or blade (for example, a system with less than 32GB of memory), leaving this parameter at the operating system default may be more appropriate.
/proc/sys/vm/max_map_count¶
AMPS makes extensive use of memory mapped files, and frequently modifies the maps. The /proc/sys/vm/max_map_count parameter sets the maximum number of maps that the Linux kernel will allow for a process. If the number of requested maps exceeds the number of maps in this parameter, memory allocation operations can fail even when there is sufficient memory available.
This setting is global to the host and applies to all applications, so this needs to be set high enough for the most map-intensive application on the host.
Minimum: 65530. Recommended: 500000.
To edit this value, as root you can enter the following command:
sysctl -w vm.max_map_count=500000
This command will update the value for /proc/sys/vm/max_map_count and allow 500,000 maps to be created, but will only do so until the next time the machine is rebooted. To make a permanent change to this setting, as a root user, edit the /etc/sysctl.conf file and either edit or append the following setting:
vm.max_map_count=500000
/proc/sys/vm/swappiness¶
AMPS performs best when the data that it needs to retain is in memory. If the operating system needs to use swap because this system requires more memory than is available, performance degrades substantially.
60East recommends that, for systems that host performance-critical instances of AMPS, the vm.swappiness setting be set to 1. This will minimize swapping on the system, which will improve performance with the tradeoff of making it more likely for processes to be killed by the operating system in low-memory situations.
This setting is global to the host and applies to all applications.
Recommended: 1
To edit this value, as root you can enter the following command:
sysctl -w vm.swappiness=1
This command will update the value for /proc/sys/vm/swappiness and direct the operating system to avoid using swap space until the system is under severe memory pressure.
Using the command above will change the swappiness setting until the operating system is rebooted. To make a permanent change to this setting, as a root user, edit the /etc/sysctl.conf file and either edit or append the following setting:
vm.swappiness=1
/proc/sys/net/ipv4/tcp_frto¶
This option controls whether Forward RTO-Recovery (FRTO) is enabled for the TCP network. Enabling FRTO can be beneficial for overall network performance if a system is sending packets over wireless networks with substantial interference (for example, public WiFi in an urban area). However, this recovery algorithm can reduce performance in wired networks. While this option is enabled on most current Linux distributions by default, disabling the option can improve network performance.
60East recommends disabling this option unless the server is directly delivering traffic over a congested WiFi network.
This setting is global to the host and applies to all applications.
Recommended: 0
To edit this value, as root you can enter the following command:
sysctl -w net.ipv4.tcp_frto=0
This command will update the value for /proc/sys/net/ipv4/tcp_frto and direct the operating system to disable FRTO.
Using the command above will change the setting until the operating system is rebooted. To make a permanent change to this setting, as a root user, edit the /etc/sysctl.conf file and either edit or append the following setting:
net.ipv4.tcp_frto=0
Upgrading an AMPS Installation¶
This section describes how to upgrade an existing installation of AMPS. The steps presented here focus on upgrading the installation itself, and should be the only steps you need for upgrades that change the HOTFIX version number or the MAINTENANCE version number (as described in Table 26.2).
For changes that update the MAJOR or MINOR version number, AMPS may add features, change file or network formats, or change behavior. For these upgrades, you may need to make changes to the AMPS configuration file or update applications to adapt to new features or changes in behavior.
60East recommends maintaining a test environment that you can use to test upgrades, particularly when an upgrade changes MAJOR or MINOR versions and you are taking advantage of new features or changed behavior.
When the AMPS instance participates in replication, you must coordinate the instance upgrades when upgrading across AMPS versions.
In this release, AMPS supports replication to and from versions 5.2.0.0 and later for the purposes of rolling upgrade. For long-term deployment, 60East recommends that all AMPS instances that replicate to each other have the same MAJOR and MINOR version number, and preferably run the same release of AMPS.
Upgrade Steps¶
Upgrading an AMPS installation involves the following steps:
- Stop the running instance
- Install the new AMPS binaries
- If you are upgrading from an AMPS version prior to 5.0.0.0, upgrade any data files or configuration files that you want to retain
- If necessary, update the configuration file for the instance
- If necessary, update any applications that will use new features
- Restart the service
AMPS supports replication from version 5.2.0.0 and later to this version of AMPS for the purposes of rolling upgrade with no (or minimal) downtime. 60East recommends that production installations of AMPS have the same MAJOR and MINOR version number at a minimum, and preferably run identical versions of AMPS.
Upgrading AMPS Data Files¶
AMPS may change the format and content of data files when upgrading across versions, as specified by the major and minor version number. This most commonly occurs when new features are added to AMPS that require different or additional information in the persisted files. The HISTORY file for the AMPS release lists when changes have been made that require data file changes.
For this version of AMPS, you must upgrade data files if the previous AMPS version is earlier than 5.0. For versions of AMPS 5.0 and later, there have been no changes to the data file format.
The AMPS distribution includes the amps_upgrade utility to process and upgrade data files. Unless you are upgrading from a version of AMPS prior to 5.0, there is no need to use this utility when upgrading AMPS.
Best Practices¶
This section covers a selection of best practices for deploying AMPS.
Monitoring¶
AMPS exposes the statistics available for monitoring via a RESTful interface, known as the Monitoring Interface, which is configured as the administration port. This interface allows developers and administrators to easily inspect various aspects of AMPS performance and resource consumption using standard monitoring tools.
At times AMPS will emit log messages notifying that a thread has encountered a deadlock or a stressful operation. These messages repeat with the word "stuck" in them. AMPS will attempt to resolve these issues; however, after 60 seconds of a single thread being stuck, AMPS will automatically emit a minidump to the configured minidump directory. This minidump can be used by 60East support to assist in troubleshooting the location of the stuck thread or the stressful process.
Another area to examine when monitoring AMPS is the last_active monitor for the processors. This can be found at the /amps/instance/processors/all/last_active URL in the monitoring interface. If the last_active value continually increases for more than one minute and there is a noticeable decline in the quality of service, then it may be best to fail over and restart the AMPS instance.
Stopping AMPS¶
To stop AMPS, ensure that AMPS runs the amps-action-do-shutdown action. By default, this action is run when AMPS receives SIGHUP, SIGINT, or SIGTERM. However, you can also configure an Action to shut down AMPS in response to other conditions. For example, if your company policy is to reboot servers every Saturday night, and AMPS is not running as a system service (or daemon), you could schedule an AMPS shutdown every Saturday before the system reboot.
When AMPS is installed to run as a system service (or daemon), AMPS installs shutdown scripts that will cleanly stop AMPS during a system shutdown or reboot.
SOW Parameters¶
Choosing the ideal SlabSize for your SOW topic is a balance between the frequency of SOW expansion and storage space efficiency. A large SlabSize will preallocate space for records when AMPS begins writing to the SOW.
If detailed tuning is not necessary, 60East recommends leaving the SlabSize at the default size if your messages are smaller than the default SlabSize. If your messages are larger than the default SlabSize, a good starting point is to set the SlabSize to several times the maximum message size you expect to store in the SOW.
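As a starting point, that guidance can be sketched as a simple rule. The multiple of 4 below is just one reading of "several times"; it is an assumption for illustration, not an AMPS default.

```python
def starting_slab_size(default_slab_size, max_expected_msg_size, multiple=4):
    """Starting-point SlabSize: keep the default unless messages are larger than it."""
    if max_expected_msg_size <= default_slab_size:
        return default_slab_size
    # otherwise, several times the largest message expected in the SOW
    return multiple * max_expected_msg_size
```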
There are three considerations when setting the optimum SlabSize:
- Frequency of allocations
- Overall size of the SOW
- Efficient use of space
A small SlabSize results in frequent extensions of your SOW topic. These frequent extensions can reduce throughput in a heavily loaded system, and in extreme cases can exhaust the kernel limit on the number of regions that a process can map. Increasing the SlabSize will reduce the number of allocations.
When the SlabSize is large, the risk of a SOW resize affecting performance is reduced. Since each slab is larger, however, more space will be consumed if you are only storing a small number of messages: this cost is amortized as the number of messages in the SOW exceeds the number of cores in the system multiplied by the number of messages that fit into a slab.
To use space most efficiently, set a SlabSize that minimizes the amount of unused space in a slab. For example, if your messages average 512 bytes but can reach a maximum of 1.2MB, one approach would be to set a SlabSize of 2.5MB to hold approximately 5 average-sized messages and two of the larger messages. Looking at the actual distribution of message sizes in the SOW (which can be done with the amps_sow_dump utility) can help you determine how best to size slabs for maximum space efficiency.
To optimize the SlabSize, determine how important each aspect of SOW tuning is for your application, and adjust the configuration to balance allocation frequency, overall SOW size, and space efficiency to meet the needs of your application.
Slow Clients¶
As described in Slow Client Management, AMPS provides capacity limits for slow clients to reduce the memory resources consumed by slow clients. This section discusses tuning slow client handling to achieve your availability goals.
Slow Client Offlining for Large Result Sets¶
The default settings for AMPS work well in a wide variety of applications with minimal tuning.
If you have particularly large SOW topics, and your application is disconnecting clients for exceeding the offlining threshold while those clients retrieve large SOW query result sets, 60East recommends the following settings as a baseline for further tuning:
| Parameter | Recommendation |
| --- | --- |
| MessageMemoryLimit | This controls the maximum memory consumed by AMPS for client messages. You can increase this parameter to allow AMPS to use more memory for records. Notice, however, that memory devoted to client messages is unavailable for other purposes. Recommended starting point for tuning large result sets: 10% of the system memory (for example, on a server with 128GB of memory, start with a 13GB limit). |
| MessageDiskLimit | The maximum amount of space to consume for offline messages. Recommended starting point for tuning large result sets: average record size * number of expected records * number of simultaneous clients. |
| MessageDiskPath | The path in which to store offline message files. 60East recommends that the message disk path be hosted on fast, high-capacity storage such as a PCIe-attached flash drive. The available storage capacity of the disk must be greater than the configured MessageDiskLimit. |

Table 26.1: Client Offline Settings for Large Result Sets
60East recommends that you use these settings as a baseline for further tuning, bearing in mind the needs and expected messaging patterns of your application.
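The MessageDiskLimit starting point from Table 26.1 is straightforward to compute. The figures in the example call below are illustrative only.

```python
def message_disk_limit_start(avg_record_size, expected_records, simultaneous_clients):
    """Table 26.1 starting point: room for each client's full result set on disk."""
    return avg_record_size * expected_records * simultaneous_clients

# e.g. 512-byte records, a 10-million-record topic, 20 clients querying at once
print(f"{message_disk_limit_start(512, 10_000_000, 20) / 1e9:.0f} GB")
```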
Minidump¶
AMPS includes the ability to generate a minidump file which can be used in support scenarios to attempt to troubleshoot a problematic instance.
The minidump captures thread state information: a snapshot of where in the source code each thread is, the call stack for each thread, and the register information for each frame of the call stack. A minidump also contains basic information about the system that AMPS was running on, such as the processor type and number of sockets. Minidumps do not contain the full internal state of AMPS or the full contents of application memory. They do not contain detailed information about the host system, and have no information about the state of the host or operating system. Instead, minidumps identify the point of failure to help 60East quickly narrow down the issue without generating large files or potentially compromising sensitive data.
Minidumps can be produced much faster than a standard core dump, and use significantly less space, since the minidump contains only a small subset of the information a core dump would contain (see Ulimit for more configuration options). Because of this, the AMPS server may produce minidumps for temporary conditions that the server subsequently recovers from, and AMPS also allows creation of a minidump on demand.
Generation of a minidump file occurs in the following ways:
- When AMPS detects a crash internally, a minidump file will automatically be generated. This includes cases where an AMPS thread or critical internal component has not reported progress for an extended period of time (typically 300 seconds).
- When a user clicks the minidump link on the amps/instance/administrator page of the administrator console (see the AMPS Monitoring Reference for more information).
- By sending the running AMPS process the SIGQUIT signal.
- In response to a configured action.
- If a thread fails to report progress with the AMPS thread monitor for 60 seconds, a minidump will automatically be generated. This should be sent to AMPS support for evaluation along with a description of the operations taking place at the time.
By default, the minidump is configured to write to /tmp, but this can be changed in the AMPS configuration by modifying the MiniDumpDirectory setting.
60East recommends monitoring the minidump directory.
If minidumps occur, contact 60East support for diagnosis and troubleshooting. Bear in mind that minidumps are often a symptom of a slowdown in the server due to resource constraints rather than an indication that the server has exited.
Once a minidump is submitted to 60East (and acknowledged as received), there is no further need to retain that minidump. 60East recommends removing minidumps when they are no longer needed.