[Figure: messages buffered for transmission, messages in transit, & messages buffered awaiting processing]
Motivation
• Many networked applications require a means to manipulate messages efficiently, e.g.:
  • Storing messages in buffers as they are received from the network or from other processes
  • Adding/removing headers/trailers from messages as they pass through a user-level protocol stack
  • Fragmenting/reassembling messages to fit into network MTUs
  • Storing messages in buffers for transmission or retransmission
  • Reordering messages that were received out-of-sequence
The ACE_Message_Block Class (2/2)
Class Capabilities
• This class is a composite that enables efficient manipulation of messages via the following operations:
  • Each ACE_Message_Block contains a pointer to a reference-counted ACE_Data_Block, which in turn points to the actual data associated with a message
  • It allows multiple messages to be chained together into a composite message
  • It allows multiple messages to be joined together to form an ACE_Message_Queue
  • It treats synchronization & memory management properties as aspects
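The composite/reference-counting design can be sketched in plain C++. This is an illustrative model only, not ACE code: DataBlock, MessageBlock, shallow_copy(), and join() are invented stand-ins for ACE_Data_Block, ACE_Message_Block, duplicate(), and traversal of the cont() chain:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Invented stand-in for ACE_Data_Block: the actual bytes, shared
// among message-block headers via reference counting.
struct DataBlock {
  std::string data;
};

// Invented stand-in for ACE_Message_Block: a lightweight header
// that shares a DataBlock & can point to a continuation block,
// chaining fragments into a composite message.
struct MessageBlock {
  std::shared_ptr<DataBlock> db;   // Reference-counted payload.
  MessageBlock *cont = nullptr;    // Next fragment in the chain.

  explicit MessageBlock (const std::string &s)
    : db (std::make_shared<DataBlock> (DataBlock{s})) {}

  // Like duplicate(): a new header sharing the same data block.
  MessageBlock shallow_copy () const { return *this; }
};

// Reassemble a fragmented message by walking the chain.
std::string join (const MessageBlock *mb) {
  std::string out;
  for (; mb != nullptr; mb = mb->cont)
    out += mb->db->data;
  return out;
}
```

Because copies share one data block, adding or stripping a header never copies the payload itself, which is the point of the ACE design.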
Two Kinds of Message Blocks
public:
ACE_UINT type () const;
ACE_UINT pid () const;
const ACE_Time_Value timestamp () const;
const char *msg_data () const;
};
Using ACE_OutputCDR
• We show the ACE CDR insertion (operator<<) & extraction (operator>>) operators for ACE_Log_Record that are used by the client application & logging server
Format  Action
%l      Displays the line number where the error occurred
%N      Displays the file name where the error occurred
%n      Displays the name of the program
%P      Displays the current process ID
%p      Takes a const char * argument and displays it and the error string corresponding to errno (similar to perror())
%T      Displays the current time
%t      Displays the calling thread's ID
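For instance, %p behaves much like the standard perror() facility. A plain-C++ sketch of roughly what it expands to (format_p is an invented helper, not part of ACE):

```cpp
#include <cassert>
#include <cerrno>
#include <cstring>
#include <string>

// Roughly what the "%p" conversion produces: the caller-supplied
// prefix plus the error string for the given errno value, like
// perror(). Invented helper for illustration, not part of ACE.
std::string format_p (const char *prefix, int err) {
  return std::string (prefix) + ": " + std::strerror (err);
}
```

So a macro call such as ACE_ERROR ((LM_ERROR, "%p\n", "connect()")) after a failed connect prints the prefix followed by the errno description, e.g. a "connection refused" message.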
Implementing the Client Application (5/6)
Use the ACE_SOCK_Connector wrapper façade to connect to the logging
server
ACE_SOCK_Connector connector;
Logging_Client logging_client;

if (connector.connect (logging_client.peer (),
                       server_addr) < 0)
  ACE_ERROR_RETURN ((LM_ERROR,
                     "%p\n",
                     "connect()"),
                    1);

// Limit the number of characters read on each record.
cin.width (ACE_Log_Record::MAXLOGMSGLEN);
Implementing the Client Application (6/6)
for (;;) {
  std::string user_input;
  getline (cin, user_input, '\n');

  if (!cin || cin.eof ()) break;

  // Create log_record.
  ACE_Time_Value now (ACE_OS::gettimeofday ());
  ACE_Log_Record log_record (LM_INFO, now,
                             ACE_OS::getpid ());
  log_record.msg_data (user_input.c_str ());

  // Send log_record to logging server.
  if (logging_client.send (log_record) == -1)
    ACE_ERROR_RETURN ((LM_ERROR,
                       "%p\n", "logging_client.send()"),
                      1);
}

return 0; // Logging_Client destructor
          // closes TCP connection.
}
The Logging_Server Classes
The figure below illustrates our Logging_Server abstract base class, the
Logging_Handler class we'll describe shortly, & the concrete logging server
classes that we'll develop in subsequent sections of the tutorial
Implementing the Logging_Server (1/5)
• This example uses the ACE_Message_Block & ACE CDR classes in a common base class that simplifies logging server implementations in the examples
Header file "Logging_Server.h":

// Forward declaration.
class ACE_SOCK_Stream;

class Logging_Server
{
public:
// Template Method that runs logging server's event loop.
virtual int run (int argc, char *argv[]);
protected:
// The following four methods are ``hooks'' that can be
// overridden by subclasses.
virtual int open (u_short logger_port = 0);
virtual int wait_for_multiple_events () { return 0; }
virtual int handle_connections () = 0;
virtual int handle_data (ACE_SOCK_Stream * = 0) = 0;
Sidebar: Template Method Pattern

• Intent
  • Define the skeleton of an algorithm in an operation, deferring some steps to subclasses
• Context
  • You have a fixed algorithm structure with variations possible for individual steps
• Problem
  • You want to plug in & out steps of the algorithm without changing the algorithm itself
• Solution
  • Define a fixed base class function that calls virtual "hook" methods that derived classes can override

[UML diagram: an abstract class whose template_method() calls hook_method_1() & hook_method_2(); Concrete Class 1 & Concrete Class 2 override the hook methods]
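The pattern can be sketched in standard C++; the class & method names below are generic illustrations, not ACE's (in the tutorial, Logging_Server::run() plays the template-method role):

```cpp
#include <cassert>
#include <string>

// Abstract class: run() is the template method. It fixes the
// algorithm's skeleton & defers individual steps to virtual hooks.
class Server {
public:
  virtual ~Server () = default;

  std::string run () {                // Template method: fixed skeleton.
    std::string trace = open_hook ();
    trace += handle_connections ();   // Step supplied by subclasses.
    trace += close_hook ();
    return trace;
  }

protected:
  virtual std::string open_hook () { return "open;"; }   // Default is ok.
  virtual std::string handle_connections () = 0;         // Must override.
  virtual std::string close_hook () { return "close;"; }
};

// Concrete class: plugs in one step without changing the skeleton.
class IterativeServer : public Server {
  std::string handle_connections () override { return "accept;"; }
};
```

Calling run() on an IterativeServer executes the fixed open/accept/close sequence even though only the middle step was supplied by the subclass.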
Implementing the Logging_Server (2/5)
Header file “Logging_Server.h” (cont’d)
// Accessor.
ACE_SOCK_Acceptor &acceptor () {
return acceptor_;
}
private:
// Socket acceptor endpoint.
ACE_SOCK_Acceptor acceptor_;
};
Implementing the Logging_Server (3/5)
Implementation file "Logging_Server.cpp":

#include "ace/FILE_Addr.h"
#include "ace/FILE_Connector.h"
#include "ace/FILE_IO.h"
#include "ace/INET_Addr.h"
#include "ace/SOCK_Stream.h"
#include "Logging_Server.h"

• Template method providing the skeleton of the algorithm to use
• Hook methods will be overridden by subclasses unless the default is ok to use

ACE_INET_Addr server_addr;
int result;
if (logger_port != 0)
  result = server_addr.set (logger_port, INADDR_ANY);
else
  result = server_addr.set ("ace_logger", INADDR_ANY);
if (result == -1) return -1;

// Create the log file using the ACE_FILE_Connector factory.
ACE_FILE_Connector connector;
return connector.connect (logging_file,
                          ACE_FILE_Addr (filename.c_str ()),
                          0,                 // No time-out.
                          ACE_Addr::sap_any, // Ignored.
                          0,                 // Don't try to reuse the addr.
                          O_RDWR|O_CREAT|O_APPEND,
                          ACE_DEFAULT_FILE_PERMS);
}
Sidebar: The ACE File Wrapper Facades
Implementing the Logging_Handler (1/6)
This class is used by the logging server to encapsulate the I/O & processing of log records:

class Logging_Handler
{
protected:
  // Reference to a log file.
  ACE_FILE_IO &log_file_;
Implementing the Logging_Handler (2/6)
Header file “Logging_Handler.h” cont’d
// Receive one log record from a connected client. <mblk>
// contains the hostname, <mblk->cont()> contains the log
// record header (the byte order and the length) and the data.
int recv_log_record (ACE_Message_Block *&mblk);
// Write one record to the log file. The <mblk> contains the
// hostname and the <mblk->cont()> contains the log record.
int write_log_record (ACE_Message_Block *mblk);
Implementing the Logging_Handler (3/6)
1. Receive incoming data & use the input CDR class to parse the header
2. Then parse the payload based on the framing protocol
3. Finally save it in an ACE_Message_Block chain
int Logging_Handler::recv_log_record (ACE_Message_Block *&mblk)
{
  // First save the peer hostname.
  ACE_INET_Addr peer_addr;
  logging_peer_.get_remote_addr (peer_addr);
  mblk = new ACE_Message_Block (MAXHOSTNAMELEN + 1);
  peer_addr.get_host_name (mblk->wr_ptr (), MAXHOSTNAMELEN);
  mblk->wr_ptr (strlen (mblk->wr_ptr ()) + 1); // Go past name.

  ACE_Message_Block *payload =
    new ACE_Message_Block (ACE_DEFAULT_CDR_BUFSIZE);
  // Align the message block for a CDR stream (force proper alignment).
  ACE_CDR::mb_align (payload);

  // Receive the header info (byte order & length).
  if (logging_peer_.recv_n (payload->wr_ptr (), 8) == 8) {
    payload->wr_ptr (8); // Reflect addition of 8 bytes.

    ACE_InputCDR cdr (payload);
Implementing the Logging_Handler (4/6)
    // Demarshal the header info.
    ACE_CDR::Boolean byte_order;
    // Use helper method to disambiguate booleans from chars.
    cdr >> ACE_InputCDR::to_boolean (byte_order);
    cdr.reset_byte_order (byte_order);

    ACE_CDR::ULong length;
    cdr >> length;

    // Resize message block to the right size for the payload,
    // aligned properly.
    payload->size (8 + ACE_CDR::MAX_ALIGNMENT + length);

    if (logging_peer_.recv_n (payload->wr_ptr (), length) > 0) {
      payload->wr_ptr (length); // Reflect additional bytes.
      mblk->cont (payload);     // Chain the header and payload.
      return length;            // Return length of the log record.
    }
  }

  // On error, free up allocated message blocks.
  payload->release ();
  mblk->release ();
  payload = mblk = 0;
  return -1;
}
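The framing protocol that recv_log_record() parses (an 8-byte CDR-aligned header holding a byte-order flag & a payload length, followed by the payload) can be sketched without ACE. frame() & deframe() below are invented stand-ins that mirror the layout, not ACE's marshaling code:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

// frame(): 1-byte byte-order flag, 3 bytes of padding (so the
// length lands on a 4-byte boundary, as CDR alignment requires),
// a 4-byte length, then the payload. In this sketch the flag
// value 1 means "little-endian sender".
std::vector<uint8_t> frame (const std::string &payload) {
  std::vector<uint8_t> buf (8 + payload.size (), 0);
  buf[0] = 1;                                  // Sender's byte order.
  uint32_t len = static_cast<uint32_t> (payload.size ());
  std::memcpy (&buf[4], &len, 4);              // Native byte order.
  std::memcpy (&buf[8], payload.data (), payload.size ());
  return buf;
}

// deframe(): read the flag, byte-swap the length if the sender's
// order differs from ours (the role reset_byte_order() plays for
// ACE_InputCDR), then slice out the payload.
std::string deframe (const std::vector<uint8_t> &buf, bool host_is_le) {
  bool sender_is_le = buf[0] != 0;
  uint32_t len;
  std::memcpy (&len, &buf[4], 4);
  if (sender_is_le != host_is_le)              // Mixed byte orders:
    len = (len >> 24) | ((len >> 8) & 0xFF00)  // swap the length.
        | ((len << 8) & 0xFF0000) | (len << 24);
  return std::string (reinterpret_cast<const char *> (&buf[8]), len);
}
```

Sending the byte-order flag rather than a fixed network order lets the common case (same order on both ends) skip all swapping, which is the CDR design choice ACE inherits.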
Implementing the Logging_Handler (5/6)
1. Send the message block chain to the log file, which is stored in binary format
2. If the debug flag is set, print the contents of the message
Iterative Logging Server
• This is the simplest possible logging server implementation
Logging_Handler &logging_handler () {
return logging_handler_;
}
protected:
ACE_FILE_IO log_file_;
Logging_Handler logging_handler_;
// Other methods shown below...
};
Implementing the Iterative_Logging_Server (2/3)
virtual int open (u_short logger_port) {
  if (make_log_file (log_file_) == -1)
    ACE_ERROR_RETURN ((LM_ERROR, "%p\n", "make_log_file()"),
                      -1);
  return Logging_Server::open (logger_port);
}

Override & "decorate" the Logging_Server::open() method

Override the handle_connections() hook method to handle one connection at a time:

virtual int handle_connections () {
  ACE_INET_Addr logging_peer_addr;
  if (acceptor ().accept (logging_handler_.peer (),
                          &logging_peer_addr) == -1)
    ACE_ERROR_RETURN ((LM_ERROR, "%p\n", "accept()"), -1);
  ACE_DEBUG ((LM_DEBUG, "Accepted connection from %s\n",
              logging_peer_addr.get_host_name ()));
  return 0;
}
Implementing the Iterative_Logging_Server (3/3)
virtual int handle_data (ACE_SOCK_Stream *) {
  // Delegate I/O to the Logging_Handler.
  while (logging_handler_.log_record () != -1)
    continue;

  logging_handler_.close (); // Close the socket handle.
  return 0;
}
Iterative vs. Concurrent Servers
• The Half-Sync/Half-Async pattern decouples asynchronous & synchronous service processing to simplify programming without unduly reducing performance
• This pattern defines two service processing layers—one async & one sync—along with a queueing layer that allows services to exchange messages between the two layers
• The pattern allows sync services (such as processing log records from different clients) to run concurrently, relative both to each other & to async/reactive services (such as event demultiplexing)

[Sequence diagram: the async service layer read()s messages from an external event source & enqueue()s them onto the queueing layer; sync service threads dequeue the messages & work() on them]
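The queueing layer at the heart of the pattern is essentially a synchronized producer/consumer queue. A minimal standard-C++ sketch (MessageQueue here is an invented stand-in for ACE_Message_Queue):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Queueing layer: the async layer enqueue()s messages it has read
// off the network; sync-layer worker threads dequeue() & process
// them at their own pace.
class MessageQueue {
  std::queue<std::string> q_;
  std::mutex m_;
  std::condition_variable cv_;
public:
  void enqueue (std::string msg) {
    {
      std::lock_guard<std::mutex> guard (m_);
      q_.push (std::move (msg));
    }
    cv_.notify_one ();               // Wake one sync-layer worker.
  }

  std::string dequeue () {           // Blocks until a message arrives.
    std::unique_lock<std::mutex> lock (m_);
    cv_.wait (lock, [this] { return !q_.empty (); });
    std::string msg = std::move (q_.front ());
    q_.pop ();
    return msg;
  }
};
```

The async side never blocks on slow workers and the workers never touch the event demultiplexer; the queue is the only point of contact between the two layers.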
Drawbacks with Half-Sync/Half-Async
Problem
• Although the Half-Sync/Half-Async threading model is more scalable than the purely reactive model, it is not necessarily the most efficient design

[Diagram: worker threads 1-3 each <<get>> requests from a shared request queue that the async layer <<put>>s into]
The Leader/Followers Pattern
• The Leader/Followers architectural pattern is more efficient than Half-Sync/Half-Async
• Multiple threads take turns sharing event sources to detect, demux, dispatch, & process service requests that occur on the event sources
• This pattern eliminates the need for—& the overhead of—a separate Reactor thread & synchronized request queue used in the Half-Sync/Half-Async pattern

[Class diagram: a Thread Pool synchronizer (join(), promote_new_leader()) uses Handles & a Handle Set (handle_events(), deactivate_handle(), reactivate_handle(), select()), which demultiplexes events to Event Handlers (handle_event(), get_handle()) implemented by Concrete Event Handlers A & B]

Handle & handle-set combinations:

                        Concurrent Handles        Iterative Handles
Concurrent Handle Sets  UDP Sockets +             TCP Sockets +
                        WaitForMultipleObjects()  WaitForMultipleObjects()

Dynamics:
1. Leader thread demuxing: the leader calls join() & handle_events() to wait for an event
2. Follower thread promotion: thread 2 sleeps until it becomes the leader; the old leader calls deactivate_handle() & promote_new_leader()
3. Event handler demuxing & event processing: thread 1 processes the current event & calls reactivate_handle(), while thread 2 waits for a new event
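The turn-taking protocol can be sketched in standard C++ with a mutex acting as the thread-pool synchronizer: the thread holding it is the leader & waits on the event source; releasing it promotes a follower while the old leader processes its event. Everything below is an illustrative model, not ACE's Leader/Followers implementation:

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

std::mutex leader_lock;           // Pool synchronizer: holder is the leader.
std::atomic<int> next_event{0};   // Stand-in for an event source.
std::atomic<int> processed{0};
const int TOTAL_EVENTS = 100;

// Each pool thread loops: become leader, demux one event, promote a
// follower by releasing the lock, then process the event concurrently
// with the new leader; no separate queueing thread is needed.
void pool_thread () {
  for (;;) {
    int ev;
    {
      std::lock_guard<std::mutex> leader (leader_lock); // "join()"
      ev = next_event.fetch_add (1);   // Leader demuxes one event.
      if (ev >= TOTAL_EVENTS)
        return;
      // Leaving this scope releases the lock: "promote_new_leader()".
    }
    processed.fetch_add (1);           // Process outside the lock.
  }
}
```

Running several pool_thread()s processes every event exactly once, with at most one thread ever waiting on the event source at a time.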
On-demand Spawning Strategy
• On-demand spawning creates a new process or thread in response to the arrival of client connection and/or data requests
• Typically used to implement the thread-per-request & thread-per-connection models
• Process contention scope (aka "user threading"), where threads in the same process compete with each other (but not directly with threads in other processes)
• System contention scope (aka "kernel threading"), where threads compete directly with other system-scope threads, regardless of what process they're in
The N:M Threading Model
• Some operating systems (such as Solaris) offer a combination of the N:1 & 1:1 models, referred to as the "N:M" hybrid-threading model
• When an application spawns a thread, it can indicate in which contention scope the thread should operate
• The OS threading library creates a user-space thread, but only creates a kernel thread if needed or if the application explicitly requests the system contention scope
• When the OS kernel blocks an LWP, all user threads scheduled onto it by the threads library also block
• However, threads scheduled onto other LWPs in the process can continue to make progress
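On Pthreads platforms the contention scope is requested per thread through a thread attribute, via the standard pthread_attr_setscope() call. A minimal sketch (note that purely 1:1 implementations such as Linux's NPTL accept only PTHREAD_SCOPE_SYSTEM, whereas Solaris's M:N library also supported PTHREAD_SCOPE_PROCESS):

```cpp
#include <cassert>
#include <pthread.h>

void *worker (void *) { return nullptr; }

// Spawn a thread with an explicit contention scope; returns 0 on
// success, or the pthread error code (e.g., ENOTSUP if the library
// does not support the requested scope).
int spawn_with_scope (int scope) {
  pthread_attr_t attr;
  pthread_attr_init (&attr);
  int rc = pthread_attr_setscope (&attr, scope);
  if (rc == 0) {
    pthread_t tid;
    rc = pthread_create (&tid, &attr, worker, nullptr);
    if (rc == 0)
      pthread_join (tid, nullptr);
  }
  pthread_attr_destroy (&attr);
  return rc;
}
```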
Task- vs. Message-based Concurrency Architectures
• A concurrency architecture is a binding between:
  • CPUs, which provide the execution context for application code
  • Data & control messages, which are sent & received from one or more applications & network devices
  • Service processing tasks, which perform services upon messages as they arrive & depart
• Task-based concurrency architectures structure multiple CPUs according to units of service functionality in an application
• Message-based concurrency architectures structure the CPUs according to the messages received from applications & network devices