KMID : 0366519960160010305
Annual Bulletin Seoul Health
1996, Vol. 16, No. 1, pp. 305-321
A Study on the Technical Development of Distributed Processing System
Rho Kyung-Taeg

Abstract
The distributed operating system provides a collection of mechanisms upon which varying resource management policies can be implemented to meet local requirements and to take advantage of technological improvements. This infrastructure allows servers to encapsulate and protect resources, while allowing clients to share them concurrently. There are two main approaches to kernel architecture: monolithic kernels and microkernels. The main difference between them lies in where the line is drawn between resource management by the kernel and resource management performed by dynamically loaded (and usually user-level) servers. A microkernel must support at least a notion of process and interprocess communication. It supports operating system emulation subsystems as well as language support and other subsystems, such as those for real-time processing. A process consists of an execution environment and threads: an execution environment consists of an address space, communication interfaces, and other local resources such as semaphores; a thread is an activity abstraction that executes within an execution environment. Address spaces need to be large and sparse in order to support sharing and mapped access to objects such as files. An important technique for copying regions is copy-on-write. Processes can have multiple threads, which share the execution environment. Multi-threaded processes allow us to achieve relatively cheap concurrency and to take advantage of multiprocessors for parallelism. Recent thread implementations allow for two-tier scheduling: the kernel provides access to multiple processors, while user-level code handles the details of scheduling policy. Distributed operating systems support reconfigurability by providing mechanisms for port migration and location, and multicast communication for locating servers and resources. These mechanisms allow location and migration transparency to be achieved. The main software mechanisms for resource protection are capabilities and access control lists. Distributed operating system kernels provide basic message-passing primitives and mechanisms for communication via shared memory. Higher-level services provide a variety of quality-of-service options: delivery guarantees, bandwidth and latency, and security. The chief overheads involved in an RPC that are candidates for optimization are marshalling, data copying, packet initialization, thread scheduling and context switching, and the flow control protocol used. RPC within a computer is an important special case, and we describe the thread management and parameter passing techniques used in lightweight RPC.
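The abstract notes that copy-on-write is an important technique for copying address-space regions. The following minimal C sketch (not taken from the paper; it assumes a Unix-like system where fork() is implemented with copy-on-write) illustrates the observable effect: after the fork, parent and child initially share physical pages, and the child's write forces a private copy, leaving the parent's value unchanged.

/*
 * Copy-on-write as exposed by fork() on a Unix-like system
 * (assumed here for illustration; the paper discusses copy-on-write
 * for address-space regions in general).
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 1;
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                 /* child: this write triggers a page copy */
        value = 99;
        printf("child sees value = %d\n", value);
        _exit(EXIT_SUCCESS);
    }
    waitpid(pid, NULL, 0);          /* parent: its page was never copied */
    printf("parent still sees value = %d\n", value);   /* prints 1 */
    return 0;
}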
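The abstract also describes multi-threaded processes, in which several threads share one execution environment (address space, communication interfaces, and local resources such as semaphores) to obtain relatively cheap concurrency. The sketch below, using POSIX threads as an assumed stand-in for the thread packages the paper surveys, shows several threads updating a variable in their shared address space under a mutex.

/*
 * Multiple threads sharing one execution environment (POSIX threads,
 * assumed for illustration). The counter lives in the shared address
 * space, so every thread sees and updates the same variable.
 */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static long shared_counter = 0;                      /* shared by all threads   */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* semaphore-like resource */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    printf("final counter = %ld\n", shared_counter);  /* 400000 */
    return 0;
}

Compile with -pthread; the example is only meant to show threads sharing an execution environment, not any particular scheduling scheme from the paper.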
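Among the RPC overheads listed as candidates for optimization, marshalling and data copying can be made concrete with a small sketch. The message layout below (a 32-bit procedure identifier followed by one 32-bit argument, in network byte order) is invented purely for illustration and is not the paper's protocol; it simply shows that each field costs a conversion and a copy into the outgoing buffer.

/*
 * Hypothetical marshalling step of an RPC request: arguments are
 * flattened into a contiguous byte buffer in network byte order
 * before being handed to the transport. Layout is illustrative only.
 */
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pack a 32-bit procedure id and one 32-bit argument into buf;
 * returns the number of bytes written. */
static size_t marshal_request(uint8_t *buf, uint32_t proc_id, uint32_t arg)
{
    uint32_t net_proc = htonl(proc_id);
    uint32_t net_arg  = htonl(arg);

    memcpy(buf, &net_proc, sizeof net_proc);                    /* one copy ...   */
    memcpy(buf + sizeof net_proc, &net_arg, sizeof net_arg);    /* ... per field  */
    return sizeof net_proc + sizeof net_arg;
}

int main(void)
{
    uint8_t buf[64];
    size_t len = marshal_request(buf, 7 /* proc id */, 42 /* argument */);
    printf("marshalled %zu bytes\n", len);
    return 0;
}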
KEYWORD
Distributed Processing System