
Distributed Processing - Assignment Example

Summary
The assignment "Distributed Processing" focuses on the critical analysis of the student's answers to the questions in distributed processing. The middleware is defined as the layer of software which interacts between the operating system and the application the users interact with…

Extract of sample "Distributed Processing"

Distributed Processing

Q3 a) Middleware is defined as the layer of software that sits between the operating system and the applications that users interact with. It interacts with various subsystems, such as the application itself, the operating system, and external databases. It serves as a medium of interaction between two or more layers, and it also provides a number of other functions that overcome many problems, as the following points show; they also highlight the different ways in which interfacing between these subsystems is done.

Middleware is generally used to mask the distributed network that makes up many systems. An application is usually a network of interconnected parts running at different locations, and the parts at different locations will almost certainly be heterogeneous, since different operating systems and protocols may be employed on different machines. Middleware hides this heterogeneity. It provides a standard, high-level interface to software developers, so that an application can easily be split, rebuilt, and reused according to the requirements. Middleware can also overcome the problem of functionality recurring in each subsystem: it provides a set of common, reusable services that can be accessed and used by the various subsystems it interacts with. This avoids duplication of code, which reduces the overhead cost of running the application.

Q3 b) Distributed systems are heterogeneous in most cases. A distributed computing system involves a number of machines, networked and distributed across various locations, to carry out a job more simply and easily. Incompatibility in distributed systems is mainly caused by differences in the operating systems, standards, and protocols of the various machines that form the network of the distributed system.
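The masking role of middleware described under Q3 a) can be illustrated with a minimal Java sketch. All of the names here (DataStore, WindowsStore, Middleware) are invented for illustration, not a real library:

```java
// Hypothetical sketch: middleware exposes one high-level interface and
// hides which platform-specific back end actually serves each request.
import java.util.HashMap;
import java.util.Map;

interface DataStore {                        // common service contract
    void put(String key, String value);
    String get(String key);
}

// Stand-ins for heterogeneous subsystems running on different machines.
class WindowsStore implements DataStore {
    private final Map<String, String> m = new HashMap<>();
    public void put(String k, String v) { m.put(k, v); }
    public String get(String k) { return m.get(k); }
}

class LinuxStore implements DataStore {
    private final Map<String, String> m = new HashMap<>();
    public void put(String k, String v) { m.put(k, v); }
    public String get(String k) { return m.get(k); }
}

// The middleware layer: applications call save/load and never learn
// which OS-specific store handled the request.
class Middleware {
    private final DataStore backend;
    Middleware(DataStore backend) { this.backend = backend; }
    void save(String k, String v) { backend.put(k, v); }
    String load(String k) { return backend.get(k); }
}

public class MiddlewareDemo {
    public static void main(String[] args) {
        Middleware mw = new Middleware(new LinuxStore());
        mw.save("order-1", "pending");
        System.out.println(mw.load("order-1")); // prints pending
    }
}
```

Because the application only ever sees the `DataStore` contract, the heterogeneous back ends can be swapped without touching application code, which is the reuse point made above.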
Each machine may run under a different operating system, and the major differences between Windows and Linux systems can cause compatibility issues: the two differ in kernel design and in how core system operations are organized, so software and system interfaces on one do not carry over to the other. The nature of the networking is another issue that must be dealt with for the systems to be compatible. Some systems may use a different networking protocol or medium, say Bluetooth, while others use a high-speed Ethernet LAN; the protocols and ports they operate on may differ, which can cause synchronization issues. A further issue is that when a problem crops up, we need to analyze which node is causing it and then troubleshoot at that node, which is cumbersome and tedious.

Java addresses these problems with its runtime (the JVM), which runs the same programs on any machine, so OS-related incompatibilities are ruled out; the interactions between machines are handled by the runtime, so there is no communication problem. CORBA uses a similar technique but is heavier and more complex, since it tries to accommodate many operating systems and languages to rule out compatibility issues.

Q5 a) Various factors affect the performance of a client-server system, and all of them relate either to the payload or to the computing strength of the client or the server: application turns, application payload, network bandwidth, network round-trip time, server compute time, and client compute time. These factors can be addressed as follows. Streamline the protocols to reduce the turn count. Use content caching to reduce the payload and overhead.
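The content-caching remedy for Q5 a) can be sketched in a few lines of Java. `CachingClient` and its methods are illustrative names, and the "server" here is a local stand-in for a real network round trip:

```java
// Hypothetical sketch of client-side content caching: repeated requests
// are served locally instead of crossing the network again.
import java.util.HashMap;
import java.util.Map;

class CachingClient {
    private final Map<String, String> cache = new HashMap<>();
    private int serverHits = 0;       // counts simulated round trips

    String fetch(String url) {
        // Serve from cache when possible; only misses reach the server.
        return cache.computeIfAbsent(url, this::fetchFromServer);
    }

    private String fetchFromServer(String url) {
        serverHits++;                 // stand-in for a network round trip
        return "content-of-" + url;
    }

    int serverHits() { return serverHits; }
}

public class CacheDemo {
    public static void main(String[] args) {
        CachingClient c = new CachingClient();
        c.fetch("/report");
        c.fetch("/report");           // second call never leaves the client
        System.out.println(c.serverHits()); // prints 1
    }
}
```

Every cache hit removes one application turn and its payload from the network, which is exactly how caching attacks the turn-count and round-trip-time factors listed above.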
Analyze and tune the network protocols so that the overall bandwidth improves, which in turn improves the network round-trip time. Some functions can be offloaded to improve server efficiency and computation time.

Q5 b) The reliability of a client-server system depends largely on the criticality and redundancy of the nodes and sub-nodes of the entire system. The critical nodes of the system must be kept in good shape and must be robust enough to meet the demanding traffic on the network. Among the failures that affect users are failures of a multi-node client-server system. We generally assume that a single-node system is in place when we carry out reliability modeling and prediction; we need to consider multiple nodes and entities to make sure we do not end up with a wrong reliability rating. If we assume that a failure can occur in only one of the sub-clients or sub-servers, we may end up with a lower reliability than the true one. So we need to account for the criticality and redundancy factors that go into the reliability of a client-server system. The criticality factor looks at the survival of the clients and servers that are important to the continued operation of the entire system, whereas the redundancy factor covers the scenario where redundant nodes are used as a means of system recovery in case one of the nodes fails. In short, we need to find out which nodes are critical and make sure each is provided with a backup in case it fails, so that the reliability of the entire system is improved.

Q1 a) Distributed computing is a method of computer processing in which different parts of a program run at the same time on two or more computers that communicate with each other through a network. The main goal of a distributed computing system is to connect users and resources in an open and scalable manner.
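The definition of distributed computing in Q1 a), different parts of a program running at the same time and their results combined, can be illustrated with a minimal hedged sketch in which two threads stand in for two networked machines:

```java
// Hypothetical sketch: two "nodes" (threads standing in for networked
// machines) each compute part of a job, and the results are combined.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DistributedSum {
    public static void main(String[] args) throws Exception {
        ExecutorService nodes = Executors.newFixedThreadPool(2);
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8};

        // Each node sums half of the data independently, in parallel.
        Future<Integer> left  = nodes.submit(() -> sum(data, 0, 4));
        Future<Integer> right = nodes.submit(() -> sum(data, 4, 8));

        // Combining the partial results gives the answer for the whole job.
        System.out.println(left.get() + right.get()); // prints 36
        nodes.shutdown();
    }

    static int sum(int[] a, int from, int to) {
        int s = 0;
        for (int i = from; i < to; i++) s += a[i];
        return s;
    }
}
```

In a real distributed system the two workers would be separate machines reached over the network rather than threads, but the split-compute-combine shape is the same.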
Centralized processing is processing performed on one computer, or on a cluster of computers at a single location, rather than at multiple locations as in distributed computing. Centralized processing is a very old model that has evolved since the beginning of computing, and even today we can see centralized systems at work in our fully featured desktops. The disadvantages of a distributed system are that the unavailability of one node can disrupt the other nodes, decreasing the overall reliability of computations, and that troubleshooting and diagnosis may prove difficult, since analysis may require connecting to remote nodes or inspecting the communication between nodes.

Q1 b) Trends such as management de-layering, workforce empowerment, business process re-engineering, internationalization, outsourcing, just-in-time (JIT) manufacturing, and on-demand operation have changed the way business is conducted every day. Each of these trends has made businesses more dependent on high-speed computing, and each requires that a company's operations at its various locations across the world stay in sync with each other, in order to serve customers better and remain competitive. On-demand operation and JIT, for example, require that the systems in the warehouse, production, delivery, and receipt departments are all on the same page at any point in time, so that wastage and storage costs are kept to a minimum. Distributed processing is therefore seen as the best way to keep all parts of an international organization under common control and to ensure that the same up-to-date data exists across the organization. Technologies such as caching, RMI, and CORBA make distributed processing more acceptable to businesses. JavaBeans are another technology, combining business methods and routines with software programming so that a mesh of business and software code evolves.
Application servers such as WebLogic and WebSphere are used to implement this: a central server is connected to different sub-servers, the application runs across all of the servers, and it is accessible to every computer networked to them.