Sunday, September 19, 2010

Operating System Basics and Systems Programming

Introduction to Systems Programming with examples in POSIX

Process
A process can be thought of as a program whose execution has started and not yet terminated.
A process can be in one of three states:

1. Running: The process is using a processor to execute instructions.

2. Ready: The process is executable, but other processes are executing and all processors are currently in use.

3. Blocked: The process is waiting for an event to occur.
Each process has a Process Control Block (PCB), a data structure that stores complete information about the process.  The PCB contains the following information:

  • Process name and ID, processor state (program counter, register contents, interrupt masks, etc.), process state, priority, privileges, virtual memory address translation maps, information about other resources owned by the process, etc.
 Concurrent Processes
Two processes are concurrent if their execution can overlap in time.


  • In multiprocessor systems, concurrent processes can run on different CPUs at the same time.

  • In uniprocessor systems, concurrency can take the form of one process using the CPU while another uses system I/O, etc.  If a CPU interleaves the execution of several processes, logical concurrency is obtained.

  • Processes are serial if the execution of one must complete before the execution of the other can start.
Concurrent processes interact through the following mechanisms:

  • Shared variables.

  • Message passing.
 Threads
Threads are mini-processes within a process and therefore share the address space of that process.
Every process has at least one thread.
A thread executes a portion of the program and cooperates with other threads concurrently executing within the same process and address space.
Because threads share the same address space, it is more efficient to perform a context switch between two threads of the same process than it is to perform a context switch between two separate processes.
Each thread has its own program counter and control block.
 Critical Sections
In multiprocessing systems, processes or threads may compete with each other for system resources that can only be used exclusively.

  • A CPU is a good example of this type of resource, as only one process or thread can execute instructions on a single CPU at a time.
Processes may also cooperate with each other by using shared memory or message passing.
In both cases, mutual exclusion is necessary to ensure that only one process or thread at a time holds a resource or modifies shared memory.
A critical section is a code segment in a process in which some shared resource is accessed.
Mutual Exclusion
A solution to the mutual exclusion problem must satisfy the following requirements:

1. Only one process can execute its critical section at any one time.

2. When no process is executing in its critical section, any process that requests entry to its critical section must be permitted to enter without delay.

3. When two or more processes compete to enter their respective critical sections, the selection cannot be postponed indefinitely (deadlock).

4. No process can prevent any other process from entering its critical section indefinitely (starvation).
 Some Mechanisms for Ensuring Mutual Exclusion

  • Polling or Busy waiting
  • Disabling interrupts
  • Semaphores
 Semaphores
A semaphore is an integer variable S and an associated group of waiting processes for which only two operations may be performed:
Wait, P(S):   if S ≥ 1 then S := S – 1
              else the executing process joins the wait queue for S and gives up the processor;
              endif
Signal, V(S): if the wait queue for S is not empty then remove a process from the queue and give it the semaphore
              else S := S + 1
              endif
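
As a concrete illustration, here is a minimal sketch of the P()/V() semantics above, built on a POSIX mutex and condition variable.  The type csem_t and the function names are illustrative only, not a standard API.

/* Minimal sketch of P()/V() using POSIX threads (illustrative, not a standard API). */
#include <pthread.h>

typedef struct {
    int             count;    /* the semaphore value S            */
    pthread_mutex_t lock;     /* protects count                   */
    pthread_cond_t  nonzero;  /* wait queue for blocked processes */
} csem_t;

void csem_init(csem_t *s, int value)
{
    s->count = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

/* P(S): decrement if S >= 1, otherwise wait on the queue. */
void csem_wait(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count < 1)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

/* V(S): increment S and wake one waiting process, if any. */
void csem_signal(csem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}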

Semaphores (cont.)
A semaphore can be used for synchronization and mutual exclusion.

  • Synchronization: coordination of various tasks or processes.

  • Mutual exclusion: protection of one or more shared resources, or implementation of critical sections among several competing processes.
There are three types of semaphores:

  • Binary semaphores

  • Mutual exclusion semaphores (mutex)

  • Counting semaphores
Binary Semaphores

  • The simplest form of semaphore.

  • Primarily used for synchronization.  Can also be used for mutual exclusion.

  • Using a binary semaphore, a process can wait (block or pend) on an event that triggers the release of the semaphore.

  • Should be used in place of polling.
- Polling is an inefficient way of synchronizing or waiting for an event because it consumes processor time.
- Using semaphores, the process is in a blocked (pended) state while other processes use the processor.
- Upon the occurrence of the event, the pended process becomes ready to execute.
Mutual Exclusion Semaphores

  • Used to provide resource consistency in a system where resources are shared by multiple tasks.
- Shared resources can include shared data, files, hardware devices, etc.

  • To protect against inconsistency in a multitasking system, each task must obtain exclusive access to a shared resource before using it.

  • One common problem that can occur without proper mutual exclusion in a multitasking system with shared resources is known as a race condition.
- The system behaves differently depending on which task reaches the resource first.
- Race conditions can go undetected for a very long time and can be very difficult to detect and fix.

  • When using mutual exclusion semaphores, be aware of the possibility of deadlock.
- Use only one mutual exclusion semaphore to protect resources that are used together.

  • When using mutual exclusion semaphores, avoid starvation situations.
- Do not include unnecessary code in the critical section.

  • Mutual exclusion semaphores cannot be used in interrupt service routines.
Counting Semaphores

  • Yet another means of implementing task synchronization and mutual exclusion.

  • A counting semaphore is just like a binary semaphore except that it keeps track of the number of times the semaphore has been given or taken.
- Counting semaphores use a counter to keep track of this count, hence the name counting semaphore.
- The semaphore is created with a fixed initial count.
- Every time the semaphore is taken, its count is decremented.
- When the semaphore is given back, if a task is blocked on it, that task takes the semaphore; otherwise the count is incremented.

  • Counting semaphores are generally used to guard resources with multiple instances.
 Classical Synchronization Problems
The dining philosophers problem
The producer-consumer problem
The readers-writers problem

  • Readers' priority
- Weak readers' priority

  • Writers' priority
- Weak writers' priority
POSIX Semaphores

  • There are two types of POSIX semaphores: named and unnamed.
Both types have exactly the same properties and behave in exactly the same manner.
The two types have different APIs for creating, opening, and destroying semaphores.

  • For named semaphores, a symbolic name is assigned to the semaphore and all references to that semaphore are made using that name.

POSIX Semaphores routines:
  • sem_init()  Initialize an unnamed semaphore.
The API:
int sem_init ( sem_t * sem, int pshared, unsigned int value );
sem: Semaphore descriptor of the unnamed semaphore to be initialized.
pshared: Indicates whether the semaphore is to be shared between processes (nonzero) or only between threads of the calling process (0).
value:  Initial value of the semaphore.  The initial value cannot be negative and must not exceed SEM_VALUE_MAX.
Returns: OK (0), ERROR (-1).

  • sem_destroy() Destroy an unnamed semaphore.
Care must be taken with this call, especially when the semaphore is used for mutual exclusion.
A good design ensures that no other task has the semaphore locked before it is destroyed.
One way of ensuring this is to adopt the protocol of only destroying semaphores that the destroying task has successfully locked.
The API:
int sem_destroy ( sem_t * sem );
sem: Semaphore descriptor.
Returns: OK (0), ERROR (-1).
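
A minimal usage sketch: an unnamed semaphore (initial value 1) protecting a shared counter between two threads.  This assumes a POSIX threads environment; error checking is omitted for brevity.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;          /* unnamed semaphore used for mutual exclusion */
static int   shared_count;   /* shared resource                             */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* enter critical section */
        shared_count++;
        sem_post(&mutex);    /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    sem_init(&mutex, 0, 1);  /* pshared = 0: shared only between threads;
                                initial value 1 (binary use)              */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("count = %d\n", shared_count);
    sem_destroy(&mutex);     /* safe: no task holds the semaphore here */
    return 0;
}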

POSIX Semaphores routines (continued):

  • sem_open()  Initialize/open a named semaphore.
The API:
sem_t * sem_open ( const char * name, int oflag, mode_t mode, unsigned int value );
name:  Semaphore name.
oflag: O_CREAT (requires the following 3rd and 4th calling parameters),
mode_t mode:  Permission bits for the semaphore: read and write for the creator, group, and others (refer to sys/stat.h for portable symbols).
unsigned int value:  Initial semaphore value.  Must be ≤ SEM_VALUE_MAX.
   O_EXCL  used in conjunction with O_CREAT.  When O_EXCL | O_CREAT is used, the call fails if the semaphore name already exists.
Returns:  A pointer to the semaphore (sem_t *), SEM_FAILED on error.

  • sem_close()  Close a named semaphore.
Deallocates the system resources allocated for the calling task's use of the named semaphore.
Closing does not destroy the semaphore; it is destroyed only after it has been removed with sem_unlink () and every task using it has closed it.
The API:
int sem_close ( sem_t * sem );
sem:  Semaphore descriptor.
Returns:  OK (0), ERROR (-1).

POSIX Semaphores routines (continued):

  • sem_unlink()  Remove a named semaphore.
Removes the string name from the semaphore name table, and marks the corresponding semaphore for destruction.
Semaphore is destroyed when the last task closes it with sem_close ().
The API:
int sem_unlink ( const char * name );
name:  Semaphore name.
Returns:  OK (0), ERROR (-1).


POSIX Semaphores routines (continued):

  • sem_wait()  Locks (takes) a semaphore, causing the calling task to block if unavailable.
Deadlock detection is not implemented.
The API:
int sem_wait ( sem_t * sem );
sem:  Semaphore descriptor.
Returns:  OK (0), ERROR (-1).

  • sem_trywait()  Locks (takes) a semaphore, returning an error if unavailable.
Deadlock detection is not implemented.
The API:
int sem_trywait ( sem_t * sem );
sem:  Semaphore descriptor.
Returns:  OK (0), ERROR (-1).

  • sem_post()  Unlocks (gives) a semaphore.
The API:
int sem_post ( sem_t * sem );
sem:  Semaphore descriptor.
Returns:  OK (0), ERROR (-1).

POSIX Semaphores routines (continued):

  • sem_getvalue()  Get the value of a semaphore.
The API:
int sem_getvalue ( sem_t * sem, int * sval );
sem:  Semaphore descriptor.
sval:  Buffer in which the semaphore value is returned.
Returns:  OK (0), ERROR (-1).
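
A sketch tying the named-semaphore calls together.  The name "/demo_sem", the permission bits, and the initial value are illustrative choices; error handling is minimal.

#include <semaphore.h>
#include <fcntl.h>      /* O_CREAT             */
#include <sys/stat.h>   /* permission symbols  */
#include <stdio.h>

int main(void)
{
    /* Create (or open) a named semaphore with initial value 1. */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, S_IRUSR | S_IWUSR, 1);
    if (sem == SEM_FAILED)
        return -1;

    if (sem_trywait(sem) == 0) {     /* take it without blocking */
        int val;
        sem_getvalue(sem, &val);     /* value is now 0           */
        printf("locked, value = %d\n", val);
        sem_post(sem);               /* give it back             */
    }

    sem_close(sem);                  /* release this task's use  */
    sem_unlink("/demo_sem");         /* mark it for destruction  */
    return 0;
}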


Message Queues

  • Used for inter-task (inter-process) communication (IPC).

  • Not as fast as semaphores, but allows cooperating tasks to communicate with each other.

  • Allows messages to be queued and sent/received to/from tasks and interrupt service routines.

  • Multiple tasks can send to and receive from a common message queue.
POSIX Message queues

  • POSIX message queues provide named queues.

  • POSIX message queues provide a range of message priorities (0, the lowest, to 31, the highest).

  • POSIX message queues allow a routine to request notification when a message arrives at an empty queue, thereby avoiding blocking or polling to wait for a message.  POSIX signals are used to achieve this.


POSIX Message queue routines:

  • mq_open()  Open a message queue.
The API:
mqd_t mq_open ( const char * mqName, int oflag, mode_t mode, struct mq_attr * pAttr );
mqName:  Message queue name.  Must begin with a “/”.
oflag: O_CREAT (requires the following 3rd and 4th calling parameters),
mode_t mode:  Mode (permission bits).
mq_attr *pAttr:  NULL, or a structure specifying mq_maxmsg and mq_msgsize.
   O_EXCL  (used in conjunction with O_CREAT.  When O_EXCL | O_CREAT is used, the call fails if the message queue name already exists.)
 O_RDONLY (open the message queue for read only.)
 O_WRONLY (open the message queue for write only.)
 O_RDWR (open the message queue for read and write.)
 O_NONBLOCK (open the message queue in non-blocking rather than blocking mode.)
When a message queue is used in non-blocking mode, a send will not block on a full queue and a receive will not block on an empty queue.
Returns:  A message queue descriptor (mqd_t), ERROR (-1).
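
A minimal creation sketch; the queue name, depth, and message size below are arbitrary illustrative values.

#include <mqueue.h>
#include <fcntl.h>

mqd_t open_demo_queue(void)
{
    struct mq_attr attr;
    attr.mq_flags   = 0;
    attr.mq_maxmsg  = 8;    /* queue depth       */
    attr.mq_msgsize = 64;   /* bytes per message */
    attr.mq_curmsgs = 0;

    /* Create (or open) the queue for reading and writing. */
    return mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);
}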

POSIX Message queue routines:

  • mq_close()  Close a message queue.
Deallocates the system resources allocated for the calling task's use of the message queue.
Closing does not destroy the queue; it is destroyed only after it has been removed with mq_unlink () and every task using it has closed it.
The API:
int mq_close ( mqd_t mqdes );
mqdes:  Message queue descriptor.
Returns:  OK (0), ERROR (-1).

POSIX Message queue routines (continued):

  • mq_unlink()  Remove a message queue.
Removes the string name from the message queue name table, and marks the corresponding message queue for destruction.
Message queue is destroyed when the last task closes it with mq_close ().
The API:
int mq_unlink ( const char * name );
name:  Message queue name.
Returns:  OK (0), ERROR (-1).

POSIX Message queue routines (continued):

  • mq_send()  Sends a message to a message queue.
The API:
int mq_send ( mqd_t mqdes, const void *pMsg, size_t msgLen, unsigned int msgPrio );
mqdes:  Message queue descriptor.
pMsg:  Pointer to the message.
msgLen:  Size of the message in bytes. Must be less than or equal to the mq_msgsize attribute of mq_attr.
msgPrio:  Message priority.  A message with a higher priority value is inserted before messages with lower values. The value of msgPrio must be less than MQ_PRIO_MAX.
Returns:  OK (0), ERROR (-1).

  • mq_receive()  Receives the oldest of the highest-priority messages from a message queue.
The API:
ssize_t mq_receive ( mqd_t mqdes, void *pMsg, size_t msgLen, unsigned int *pMsgPrio );
mqdes:  Message queue descriptor.
pMsg:  Pointer to the buffer that receives the message.
msgLen:  Size of the buffer in bytes. Must be greater than or equal to the mq_msgsize attribute of mq_attr.
pMsgPrio:  Receives the priority of the message, if not NULL.
Returns:  Number of bytes in the received message, ERROR (-1).
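
A sketch of one task sending on and another receiving from the queue created above; the priority and buffer size are illustrative (the receive buffer must be at least mq_msgsize bytes).

#include <mqueue.h>

void send_hello(mqd_t mq)
{
    const char msg[] = "hello";
    mq_send(mq, msg, sizeof(msg), 5);        /* priority 5 */
}

void receive_one(mqd_t mq)
{
    char     buf[64];                        /* >= mq_msgsize */
    unsigned prio;
    ssize_t  n = mq_receive(mq, buf, sizeof(buf), &prio);
    if (n >= 0) {
        /* buf holds the oldest message of the highest priority;
           prio holds its priority. */
    }
}
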
POSIX Message queue routines (continued):

  • mq_notify()  Notify a task that a message is available on a queue.
The API:
int mq_notify ( mqd_t mqdes, const struct sigevent * pNotification );
mqdes:  Message queue descriptor.
pNotification:  Pointer to a sigevent structure describing the notification (for example, a real-time signal).
Returns:  OK (0), ERROR (-1).
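
A sketch of requesting notification via a real-time signal when a message arrives at an empty queue; the choice of SIGRTMIN is illustrative, and a handler for that signal is assumed to be installed elsewhere.

#include <mqueue.h>
#include <signal.h>
#include <string.h>

void request_notification(mqd_t mq)
{
    struct sigevent ev;
    memset(&ev, 0, sizeof(ev));
    ev.sigev_notify = SIGEV_SIGNAL;   /* notify by delivering a signal */
    ev.sigev_signo  = SIGRTMIN;       /* the real-time signal to send  */

    /* The notification fires once, when a message arrives at an empty
       queue; it must be re-registered after each delivery. */
    mq_notify(mq, &ev);
}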

POSIX Message queue routines (continued):

  • mq_setattr() can be used to set the message queue attribute:
- mq_flags (O_NONBLOCK)

  • mq_getattr() can be used to get the message queue attributes:

  • mq_flags (O_NONBLOCK)
  • mq_maxmsg
  • mq_msgsize
  • mq_curmsgs


POSIX Message queue routines (continued):

  • mq_setattr()  Set message queue attributes.
The API:
int mq_setattr ( mqd_t mqdes, const struct mq_attr * pMqStat, struct mq_attr * pOldMqStat );
mqdes:  Message queue descriptor.
pMqStat:  New attributes.
pOldMqStat:  Buffer in which the old attributes are returned.
Returns:  OK (0), ERROR (-1).

  • mq_getattr()  Get message queue attributes.
The API:
int mq_getattr ( mqd_t mqdes, struct mq_attr * pMqStat );
mqdes:  Message queue descriptor.
pMqStat:  Buffer in which to return attributes.
Returns:  OK (0), ERROR (-1).
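
A sketch of switching an open queue to non-blocking mode with mq_getattr()/mq_setattr() (only mq_flags can be changed this way):

#include <mqueue.h>
#include <fcntl.h>

void set_nonblocking(mqd_t mq)
{
    struct mq_attr attr, old;

    mq_getattr(mq, &attr);         /* read the current attributes        */
    attr.mq_flags = O_NONBLOCK;    /* request non-blocking operation     */
    mq_setattr(mq, &attr, &old);   /* old receives the previous settings */
}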


POSIX Pipes

  • An alternative interface to the message queue facility.
Functionally, pipes are very similar to message queues.
Tasks block when they read from an empty pipe or write to a full pipe.
Much like message queues, interrupt service routines (ISRs) can write to a pipe.
Like message queues, ISRs cannot read from a pipe.

  • Uses the I/O system to perform communication via pipes.
Uses the standard I/O routines: open(), read(), write(), ioctl(), select().

  • Pipes are implemented by an I/O driver.
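
As an illustration of the I/O-style interface, here is a minimal sketch using a standard POSIX pipe(); on systems where pipes are created through a named pipe driver the creation step differs, but the read()/write() usage is the same.

#include <unistd.h>
#include <string.h>

int demo_pipe(void)
{
    int  fds[2];                        /* fds[0]: read end, fds[1]: write end */
    char buf[16];

    if (pipe(fds) == -1)
        return -1;

    write(fds[1], "ping", 5);           /* writer side                         */
    read(fds[0], buf, sizeof(buf));     /* reader blocks until data arrives    */

    close(fds[0]);
    close(fds[1]);
    return strcmp(buf, "ping");         /* 0 on success                        */
}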


Shared memory

  • Tasks may use shared memory segments to communicate.

  • Semaphores must be used to provide mutual exclusion or synchronization for shared memory segments.
POSIX Shared memory routines:

  • shm_open()  Open a shared memory object.
The API:
int shm_open ( const char * shmName, int oflag, mode_t mode );
shmName:  Shared memory object name.  Must begin with a “/”.
oflag: O_CREAT (requires the following 3rd calling parameter),
mode_t mode:  Mode (permission bits).
   O_EXCL  (used in conjunction with O_CREAT.  When O_EXCL | O_CREAT is used, the call fails if the shared memory name already exists.)
   O_TRUNC  (only used with O_RDWR; sets the shared memory object’s size to zero.)

 O_RDONLY (open the shared memory for read only.)
 O_RDWR (open the shared memory for read and write.)
Returns:  A shared memory (file) descriptor, int, ERROR (-1).
Note:  Shared memory objects are created with a size of zero.  They are sized using ftruncate and mapped in using mmap.

POSIX Shared memory routines (continued):
  • shm_unlink()  Unlinks a shared memory object.
Removes the string name from the shared memory name table.
The shared memory object is destroyed when the last task closes it with close ().
The API:
int shm_unlink ( const char * name );
name: Shared memory name.
Returns:  OK (0), ERROR (-1).
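
A sketch of the lifecycle described above: create, size with ftruncate(), map with mmap(), then unmap, close, and unlink.  The object name and size are illustrative.

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

int demo_shm(void)
{
    const size_t size = 4096;

    /* Create the shared memory object (its size is initially zero). */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    if (fd == -1)
        return -1;

    ftruncate(fd, size);                        /* give it a size          */

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);          /* map it into this task   */
    if (p == MAP_FAILED)
        return -1;

    p[0] = 'A';      /* visible to any other task that maps "/demo_shm"    */

    munmap(p, size);
    close(fd);
    shm_unlink("/demo_shm");                    /* mark it for destruction */
    return 0;
}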

Network System and Network Programming

The network system allows users to create distributed applications across a computer network (e.g., over an Ethernet medium).

  • UDP and TCP protocols.

  • Remote login (rlogin) and remote file access (RSH, FTP, TFTP, and NFS).
Network System and Network Programming

Socket Overview.

  • Provides the programmer with a direct interface to the Internet protocol suite.

  • A protocol (TCP or UDP) must be selected at socket creation time.

  • A server binds its socket to a pre-selected, well-known port number.

  • A client is assigned a port number dynamically by the operating system.

Network System and Network Programming

IP: Internet Protocol.

  • IP is the network-layer protocol.

Each networked computer has a unique four-byte (IPv4) Internet address.
IP accepts and delivers packets addressed to a particular host or gateway.
IP fragments and reassembles packets to fit the packet size limits of the physical network (e.g., Ethernet, Fast Ethernet, FDDI, Token Ring, etc.).
Network System and Network Programming

UDP: User Datagram Protocol.

  • Provides a simple but fast datagram-based communication mechanism.

  • Accepts messages addressed to a particular computer and delivers them to a designated port.
Each computer can have multiple port addresses in addition to its network (IP) address.

  • UDP does not guarantee that messages are delivered correctly, or even delivered at all.
It may therefore be viewed as unreliable.
Network System and Network Programming

TCP: Transmission Control Protocol.

  • Slower than UDP.

  • Reliable; provides flow control and two-way communication.

  • TCP guarantees that data is delivered correctly, in the proper order, and without duplication.

  • Before data can be exchanged, a connection is established between the processes/computers involved.

  • Uses a port address in addition to the network address.


Network System and Network Programming

TCP (connection-oriented) Socket client/server setup:

Task 1 (server) calls   Task 2 (client) calls   Function
socket()                socket()                Create socket – must determine socket protocol type.
bind()                  N/A                     Assign address to socket – binding not required on the client side as port number is dynamically assigned on the client side.
listen()                N/A                     Allow others to connect.
N/A                     connect()               Request connection to socket address.
accept()                N/A                     Complete socket connection.
write()                 write()                 Send data to the other socket.
read()                  read()                  Receive data from the other socket.
close()                 close()                 Close socket.
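
A minimal sketch of the server-side call sequence from the table above (single connection, no error checking; the port number 5000 is an arbitrary illustrative choice):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int tcp_server(void)
{
    struct sockaddr_in addr;
    char buf[128];

    int s = socket(AF_INET, SOCK_STREAM, 0);    /* TCP stream socket        */

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);         /* pre-selected known port  */

    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    listen(s, 5);                               /* backlog of 5 connections */

    int conn = accept(s, NULL, NULL);           /* blocks until a client
                                                   calls connect()          */
    read(conn, buf, sizeof(buf));               /* exchange data            */
    write(conn, "ok", 3);

    close(conn);
    close(s);
    return 0;
}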

Network System and Network Programming

UDP (connectionless) Socket client/server setup:

Task 1 (server) calls   Task 2 (client) calls   Function
socket()                socket()                Create socket – must determine socket protocol type.
bind()                  N/A                     Assign address to socket.
recvfrom()              sendto()                Allow communication.
sendto()                recvfrom()              Allow communication.
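
A minimal sketch of the server side of the UDP exchange above (port 5001 is an arbitrary illustrative choice; modern systems declare the address length as socklen_t):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int udp_server(void)
{
    struct sockaddr_in me, peer;
    socklen_t peerLen = sizeof(peer);
    char buf[128];

    int s = socket(AF_INET, SOCK_DGRAM, 0);     /* UDP datagram socket      */

    memset(&me, 0, sizeof(me));
    me.sin_family      = AF_INET;
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    me.sin_port        = htons(5001);           /* pre-selected known port  */
    bind(s, (struct sockaddr *)&me, sizeof(me));

    /* Receive one datagram, then reply to whoever sent it. */
    recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &peerLen);
    sendto(s, "ok", 3, 0, (struct sockaddr *)&peer, peerLen);

    close(s);
    return 0;
}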

Network System and Network Programming

BSD Sockets.

  • socket( )  Open a socket.
The API:
int socket ( int domain, int type, int protocol );
domain:   Address family.  AF_UNIX (Unix internal protocols – not supported by VxWorks),
AF_INET (Internet protocols – supported by VxWorks),
AF_NS (Xerox NS protocols – not supported by VxWorks),
AF_IMPLINK (IMP link layer).
type:  Socket type.  SOCK_STREAM (connection-based or TCP stream socket),
   SOCK_DGRAM (datagram or UDP socket),
   SOCK_RAW (raw socket).
protocol:  usually 0.
Returns:  A socket descriptor (standard I/O file descriptor), ERROR (-1).

Network System and Network Programming

BSD Sockets (continued).

  • bind( )  Bind a name to a socket.
The API:
STATUS bind ( int s, struct sockaddr *name, int namelen );
s:   Socket descriptor.
name: Pointer to name to be bound.
namelen:  Length of name.
Returns:  OK (0), ERROR (-1).

  • listen( )  Enable connections to a socket.
The API:
STATUS listen ( int s, int backlog );
s:   Socket descriptor.
backlog: Maximum number of unaccepted connections to queue.
Returns:  OK (0), ERROR (-1).

Network System and Network Programming

BSD Sockets (continued).

  • connect( )  Initiate a connection on a socket.
The API:
STATUS connect ( int s, struct sockaddr *name, int namelen );
s:   Socket descriptor.
name: Pointer to the address of the socket to connect to.
namelen:  Length of name in bytes.
Returns:  OK (0), ERROR (-1).

  • accept( )  Accept a connection on a socket.
The API:
int accept ( int s, struct sockaddr *addr, int *addrlen );
s:   Socket descriptor.
addr: Pointer to peer address.
addrlen: Pointer to address length.
Returns:  A socket descriptor, ERROR (-1).

Network System and Network Programming

BSD Sockets (continued).

  • recvfrom( )  Receive a message from a socket.
The API:
int recvfrom ( int s, char *buf, int bufLen, int flags, struct sockaddr *from, int *pFromLen );
s:   Socket descriptor.
buf:  Pointer to data buffer.
bufLen:  Length of buffer.
flags: Flags to underlying protocols.
from:  Pointer to sender’s address.
pFromLen:  Pointer to length of from.
Returns:  Number of bytes received, ERROR (-1).

  • sendto( )  Send a message to a socket.
The API:
int sendto ( int s, caddr_t buf, int bufLen, int flags, struct sockaddr *to, int toLen );
s:   Socket descriptor.
buf:  Pointer to data buffer.
bufLen:  Length of buffer.
flags: Flags to underlying protocols.
to:  Pointer to recipient’s address.
toLen:  Length of to sockaddr.
Returns:  Number of bytes sent, ERROR (-1).
