Containerizing SAP ABAP Application Servers on Kubernetes
Richard Treu, Henning Sackewitz
This blog post describes our experiences and findings while doing a proof of concept (PoC) in the SAP LinuxLab where we containerized the SAP ABAP application server components and deployed them on various Kubernetes environments. It will also point out potential benefits and challenges.
Please note that this document is neither a complete solution, nor does it represent any current product or development status. The current support status of ABAP application servers running in containers or container-orchestration environments is documented in SAP note 1122387.
Feel free to comment and share this blog post.
Every ABAP system consists of three tiers: the database containing data and programs, the application server, and the clients. For this PoC the focus was on the ABAP application server.
The SAP NetWeaver Application Server ABAP can be broken up into multiple components, each a candidate for containerization. The first natural choice is the Application Server instances (AS), because they are the most stateless, cattle-like part of the stack and can be scaled relatively easily. However, we also opted to deploy the mandatory components of the ABAP Central Services, namely the Message Server and the Enqueue Server. Finally, we added the optional SAP Web Dispatcher and SAProuter to the setup.
The underlying SAP HANA database was out of scope for our PoC – it was regarded as a given external resource that can be connected to via configurable secure store credentials.
Our effort was to put all the above components into separate container images, map them to appropriate Kubernetes objects and tie them together in a way that we can use Kubernetes features in the best way.
Our goal was to create a generic ABAP Kubernetes deployment that can integrate into any Kubernetes environment, regardless of whether it is an on-premise, self-managed Kubernetes-based product (e.g. CaaSP, OpenShift) or a Kubernetes-as-a-service offering in the public cloud (e.g. GKE).
Docker Images and Kubernetes Deployment Files
In Kubernetes, applications are distributed via pre-built container images along with Kubernetes YAML deployment files.
Our goal was to build generic ABAP images that can be customized with environment-specific input parameters which are configurable via Kubernetes YAML files. At deployment time they are injected into the Kubernetes environment. For example, the HANA database connection parameters will be injected as Kubernetes secrets.
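As an illustration, here is a minimal sketch of how such an injection could look; the Secret name, key names, and the container image are hypothetical, not the PoC's actual artifacts:

```yaml
# Hypothetical Secret holding HANA connection parameters
# (all names and values are illustrative).
apiVersion: v1
kind: Secret
metadata:
  name: hana-connection
  namespace: sap
type: Opaque
stringData:
  HDB_HOST: hana.example.com
  HDB_PORT: "30015"
  HDB_USER: SAPABAP1
  HDB_PASSWORD: change-me
---
# Injecting the Secret into an (assumed) application server container
# as environment variables at deployment time.
apiVersion: v1
kind: Pod
metadata:
  name: as-example
  namespace: sap
spec:
  containers:
    - name: abap-as
      image: registry.example.com/sap/abap-as:latest   # placeholder image
      envFrom:
        - secretRef:
            name: hana-connection
```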
Some attributes are static, immutable values and not configurable in this PoC:
- SAP SID
- SAP instance number
- SAP admin user
Also, we selected the most current, backward-compatible SAP kernel that will work with most NetWeaver and S/4HANA releases.
Deployment in Kubernetes
Application Server (AS)
The actual workload in an ABAP system is performed on the Application Server in a server-side session. This is where most memory and processing power is consumed aside from the database, so this is the most important entity to be scaled with Kubernetes according to workload demand.
It is very easy to scale up application server instances (Pods) as workload grows, but scale-down can lead to broken user sessions if Pods are just arbitrarily destroyed in a cattle-like manner.
We placed the application server in a Deployment with one initial replica. A Deployment can be scaled down in a user-controlled order, as opposed to a StatefulSet, where only a reverse-ordered Pod scale-down is possible, regardless of the actual user session load.
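A minimal sketch of what such a Deployment could look like, assuming a hypothetical image name and label scheme:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abap-as
  namespace: sap
spec:
  replicas: 1              # one initial replica, scaled up on demand
  selector:
    matchLabels:
      app: abap-as
  template:
    metadata:
      labels:
        app: abap-as
    spec:
      containers:
        - name: abap-as
          image: registry.example.com/sap/abap-as:latest  # placeholder
          ports:
            - containerPort: 3200   # conventional DIAG port 32<NN> for instance 00
```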
We solved the hard shutdown issue by implementing a Horizontal Pod Autoscaler logic: priority Annotations are assigned to the Pods according to their current session load. Whenever a scale-down is executed, the server with the lowest priority is issued a soft shutdown, and sessions are slowly drained from the Pod.
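The PoC's prioritization logic is custom and not shown here. As a rough analogue, upstream Kubernetes offers the controller.kubernetes.io/pod-deletion-cost annotation (beta since v1.22): a session-monitoring controller could set it per Pod so that a Deployment scale-down removes the least-loaded Pod first. A hedged sketch:

```yaml
# Sketch: a session-monitoring controller (not shown) could patch each
# Pod's deletion cost from its current session count. Pods with lower
# cost are preferred for removal on ReplicaSet scale-down, so the
# least-loaded Pod goes first.
apiVersion: v1
kind: Pod
metadata:
  name: abap-as-7d9f-xkz2   # example Pod name
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "120"  # e.g. 120 active sessions
```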
As the application work processes produce several log files, a sidecar container is used to pull the logs and forward them to a log target for each Application Server Pod. This way log files are persisted, e.g. for root cause analysis after work process failures and subsequent container restarts.
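A sketch of such a sidecar arrangement, assuming a hypothetical SID/instance log path and Fluent Bit as one possible log forwarder:

```yaml
# Sketch: log-forwarding sidecar next to the (assumed) AS container;
# both share an emptyDir volume holding the work process log files.
apiVersion: v1
kind: Pod
metadata:
  name: abap-as-with-logs
  namespace: sap
spec:
  volumes:
    - name: work-logs
      emptyDir: {}
  containers:
    - name: abap-as
      image: registry.example.com/sap/abap-as:latest   # placeholder
      volumeMounts:
        - name: work-logs
          mountPath: /usr/sap/XYZ/D00/work             # assumed SAP work directory
    - name: log-forwarder
      image: fluent/fluent-bit:latest                  # one possible forwarder
      volumeMounts:
        - name: work-logs
          mountPath: /logs
          readOnly: true
```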
Message Server and Enqueue Server
Both the Message Server and the Enqueue Server are singleton instances by design. For greater flexibility we created separate container images for each, but placed them inside one Pod called ASCS (ABAP SAP Central Services).
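A sketch of what this two-container Pod template could look like; the image names are placeholders, and the ports follow the usual SAP conventions (36&lt;NN&gt; for the message server, 32&lt;NN&gt; for the standalone enqueue server, here with instance number 01):

```yaml
# Sketch: Message Server and Enqueue Server as two containers
# in one ASCS Pod (images and instance number are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: ascs
  namespace: sap
  labels:
    app: ascs
spec:
  containers:
    - name: message-server
      image: registry.example.com/sap/msg-server:latest  # placeholder
      ports:
        - containerPort: 3601    # conventional MS port for instance 01
    - name: enqueue-server
      image: registry.example.com/sap/enq-server:latest  # placeholder
      ports:
        - containerPort: 3201    # conventional enqueue port for instance 01
```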
Since it is necessary for an Application Server to reach the Message Server via a static DNS name, we placed the ASCS Pod in a StatefulSet, which makes it DNS-resolvable.
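A hedged sketch of this pattern: a headless Service plus a single-replica StatefulSet gives the ASCS Pod a stable DNS name such as ascs-0.ascs.sap.svc.cluster.local (all names are illustrative):

```yaml
# Headless Service: clusterIP None creates per-Pod DNS records.
apiVersion: v1
kind: Service
metadata:
  name: ascs
  namespace: sap
spec:
  clusterIP: None
  selector:
    app: ascs
  ports:
    - name: msg
      port: 3601
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ascs
  namespace: sap
spec:
  serviceName: ascs        # ties Pod DNS names to the headless Service
  replicas: 1
  selector:
    matchLabels:
      app: ascs
  template:
    metadata:
      labels:
        app: ascs
    spec:
      containers:
        - name: message-server
          image: registry.example.com/sap/msg-server:latest  # placeholder
        - name: enqueue-server
          image: registry.example.com/sap/enq-server:latest  # placeholder
```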
Since the Message Server is basically stateless, a container restart is not critical. The Enqueue Server keeps the lock table, so it is not completely stateless. To implement high availability for the Enqueue Server, it is recommended to start a secondary enqueue server that keeps a copy of the lock table. This is known as Enqueue Replication and could be achieved by creating another singleton Pod. However, this was out of scope for this PoC.
SAProuter and Web Dispatcher
For accessing the system via SAP GUI, the SAProuter connects a client to the correct application server. In contrast to the Kubernetes load balancers, the SAProuter is aware of the proprietary SAP DIAG protocol and forwards connections to the corresponding sessions. The SAProuter is stateless and can be scaled easily if necessary. It can be deployed as a Pod, DaemonSet, or Deployment.
The last component is the Web Dispatcher, a load balancer enhanced with proprietary security features and endpoint control. It is stateless and can be scaled up easily if needed. Since we needed only one Web Dispatcher instance in our PoC, we bundled it together with the Message Server and the Enqueue Server into the same Pod.
Note: It is possible to skip the Web Dispatcher and use the Kubernetes load balancer to connect directly to the ICM (Internet Communication Manager) processes of the application server containers. However, this is critical from a security perspective and would constitute a non-standard SAP setup.
Communication and Client Connectivity
After all relevant SAP components were organized in Kubernetes Pods, we had to make sure that they can properly communicate with each other, as well as with external clients.
Services
Communication between Pods in a Kubernetes cluster is done via Services. Since Kubernetes does automatic port mapping on a Node where multiple Pods expose identical ports, this setup allows SAP application servers to scale up on a single Node without port conflicts.
Both the Application Server Deployment and the ASCS StatefulSet were encapsulated in Kubernetes Services.
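For example, a ClusterIP Service in front of the Application Server Deployment could look like this (names are illustrative; 32&lt;NN&gt; is the conventional DIAG port and 80&lt;NN&gt; the ICM HTTP port for instance number 00):

```yaml
# Sketch of a ClusterIP Service for the (assumed) abap-as Deployment.
apiVersion: v1
kind: Service
metadata:
  name: abap-as
  namespace: sap
spec:
  selector:
    app: abap-as
  ports:
    - name: diag
      port: 3200          # SAP GUI / DIAG
    - name: http
      port: 8000          # ICM HTTP
```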
Load Balancer
Connections from external clients (SAP GUI, web browser) to the Services are made via an external load balancer. The load balancer type depends on the underlying infrastructure that Kubernetes is running on. For this PoC we used OpenStack with an HAProxy load balancer, as well as a bare-metal infrastructure. Deploying the load balancer requires API calls into the IaaS layer, so the IaaS-specific Kubernetes Cloud Provider Interface (CPI) has to be configured. For simplicity, we ended up using MetalLB as the load balancer; we also successfully tested HAProxy and a hardware load balancer.
The external load balancer IP, or its DNS-resolvable host name, is the single entry point for all client communication.
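A sketch of such an entry point as a Service of type LoadBalancer; with MetalLB the external IP is assigned from a configured address pool (the Service name, selector, and Web Dispatcher HTTPS port are assumptions):

```yaml
# Sketch: single external entry point for client traffic.
apiVersion: v1
kind: Service
metadata:
  name: sap-entry
  namespace: sap
spec:
  type: LoadBalancer       # MetalLB (or the cloud CPI) provides the IP
  selector:
    app: ascs              # Pod bundling Web Dispatcher, MS, and ES
  ports:
    - name: https
      port: 443
      targetPort: 44300    # assumed Web Dispatcher HTTPS port (443<NN>)
```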
Despite its name, the load balancer does not actually balance load in this setup; it merely serves as the external communication entry point. The load is in fact distributed by the Web Dispatcher and the Message Server using SAP Logon Groups.
Namespaces
Finally, we organized all SAP Kubernetes objects in a dedicated Kubernetes Namespace ‘sap’ for logical separation from other cluster artifacts. Furthermore, multiple SAP instances could be deployed on a single cluster by assigning them to separate Namespaces, e.g. ‘sapqa’, ‘sapdev’, ‘sapprod’.
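Creating such Namespaces is a one-liner each, for example:

```yaml
# One Namespace per landscape keeps several ABAP instances on the
# same cluster logically separated (names follow the example above).
apiVersion: v1
kind: Namespace
metadata:
  name: sapdev
---
apiVersion: v1
kind: Namespace
metadata:
  name: sapqa
```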
(Figure: overview of how all the components come together in the Kubernetes cluster.)
In principle it is possible to run ABAP in a Kubernetes environment. It allows rapid and flexible deployments, especially for test, development and training systems. Due to the complex architecture of the ABAP application server components, some challenges and overlaps with Kubernetes functionalities exist and must be addressed accordingly (e.g. load balancing, name resolution, lifecycle).
Benefits
No installation procedure required
Thanks to the pre-built container images there is no need to install every new ABAP instance as traditionally done on bare metal hardware or virtual machines. We just provide a collection of Kubernetes deployment files and some container images which are dynamically deployed within the Kubernetes cluster.
(Re-)Deploy ABAP instances in a matter of seconds
Once a container image is downloaded and cached, Kubernetes will bootstrap complete ABAP systems in a very short amount of time. All ABAP containers are automatically re-orchestrated across the available Kubernetes Worker Nodes whenever there is a service disruption (e.g. a hardware outage).
Scale a small system large with just one click
Separating the scalable Application Server from the ASCS by placing them in dedicated containers allows for spinning up multiple SAP dialog instances with one command or one click. Because of the encapsulated design of dialog instances and the usage of virtual service endpoints in Kubernetes, scaling up ABAP systems is pretty easy.
Auto-Scaling of Application Servers
Kubernetes standard features include automatic scaling of Pods based on CPU utilization or memory pressure. These auto-scaling functions can be leveraged to elastically scale an ABAP system when very high or low load is detected. Shared hardware resources in the customer's data center can then be utilized more efficiently, especially for non-productive systems, without live intervention by an administrator.
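A minimal sketch of a CPU-based HorizontalPodAutoscaler for the (assumed) abap-as Deployment; the thresholds are purely illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: abap-as
  namespace: sap
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: abap-as
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # illustrative threshold
```

Note that a plain HPA scale-down does not know about ABAP user sessions; this is exactly the stickiness problem addressed by the prioritization mechanism described above.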
Deployment of multiple, adjacent landscapes
Another benefit is the simple and fast deployment of multiple ABAP instances in the same environment. It is possible to run a single ABAP instance per Kubernetes cluster, or to share one Kubernetes cluster among multiple ABAP instances. All ABAP instances will be available via load balancer addresses provided by the underlying infrastructure (on-premise/self-managed or public cloud). Kubernetes also takes care of port mapping and avoids conflicts between SAP instances with identical ports on the same Node by assigning unique intermediate ports.
Challenges
Auto-Scaling vs. Session-Stickiness
The ABAP architecture keeps user session contexts on one specific dialog instance server during the whole user session until either the user logs off or the session reaches a timeout. A scale-down of dialog instance servers can lead to terminated user sessions.
Furthermore, batch processes – which also live on dialog instance servers – must not be terminated. In our PoC we solved this through a prioritization mechanism to determine which container can be terminated.
Load-Balancing Mechanisms
One benefit of Kubernetes is the built-in load balancing between Worker Nodes. However, ABAP provides its own load-balancing and enqueue mechanisms depending on the access method used (SAP GUI, Web GUI, RFC, …). Thus, there is a functional overlap, and Kubernetes load balancing can only be used in a limited manner.
Raised Complexity for System Connectivity
Containerization and the underlying infrastructure platform add multiple network layers, so accessing the SAP system from a client (SAP GUI, browser) is more complex than accessing bare-metal systems. On the other hand, Kubernetes tooling makes it possible to continuously check system availability and network performance to identify issues.
Database with compatible SAP NetWeaver or S/4HANA Content required
The database holds ABAP programs, all the business logic and all customer data. To connect a containerized ABAP system with a specific kernel, a compatible SAP HANA database with the correct initial database load is required.
Application specific requirements
We assume that the SAP application server and its ABAP applications may have further requirements, e.g. web service endpoints, remote system connections, or mobile application connectivity. There are also implicit assumptions about the underlying infrastructure, e.g. hardware IDs for the SAP license key, Linux kernel parameter values, etc.