Peer-to-Peer and Client-Queue-Client Architecture
Client-Server Architecture Compared with Peer-to-Peer and Client-Queue-Client Architecture
Client-Server Architecture, which is the dominant model for Internet communication, separates client programs/machines from server programs/machines. These endpoints communicate through a network and are also known as nodes, meaning any devices connected by a network. Clients send requests to the server through client instances. A client instance occurs when an object, an individual unit of runtime data storage, is called into action from its abstract class.
It is the instances that perform the specific work of requests. Requests are messages between objects and are sent to connected servers. Servers accept, process, and return requested information to the client over the same network on which the request was sent. This interaction is described using sequence diagrams that are standardized in UML (Unified Modeling Language), which provides a simple visual for workflow and object modeling.
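The request/response cycle described above can be sketched in a few lines. This is an illustrative, in-memory model only: the `Server` and `Client` classes and the toy key-value store are invented for the example, and the direct method call stands in for a real network connection.

```python
# Minimal sketch of the client-server request/response cycle.
# (Illustrative names; the direct call stands in for a network hop.)

class Server:
    """Passive endpoint: accepts a request, processes it, returns a response."""
    def __init__(self):
        self._data = {"greeting": "hello"}       # toy in-memory store

    def handle(self, request):
        # Process the request and return the requested information.
        key = request["key"]
        return {"status": "ok", "value": self._data.get(key)}

class Client:
    """Active endpoint: initiates requests and waits for responses."""
    def __init__(self, server):
        self.server = server                     # stands in for a connection

    def request(self, key):
        return self.server.handle({"key": key})

server = Server()
client = Client(server)
print(client.request("greeting"))               # {'status': 'ok', 'value': 'hello'}
```

Note how the roles match the prose: the client is active (it initiates), while the server is passive (it only reacts to requests).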
The most popular type of client is the web browser. Web browsers work over the World Wide Web, WANs (Wide Area Networks), and LANs (Local Area Networks). They allow users to interact with text, images, and other media (video, audio) on a web page and allow for navigation between pages or sites through hyperlinks. HTML (Hypertext Markup Language) formats web pages to customize their appearance. The most popular web browsers for personal computers are Internet Explorer, Opera, Safari, Mozilla Firefox, and Netscape.
Servers come in different types as well. Some examples are web servers, database servers, and mail servers. Clients typically interact with users through a GUI (Graphical User Interface). They initiate requests and wait for responses. They are active, as opposed to servers, which are passive. Servers do not usually interact directly with users. A level of middleware translates and manages requests between clients and servers that run on different platforms or are written in different languages.
Client-Server models share characteristics that will facilitate our discussion of alternatives.
A Client-Server Architecture that contains only a client and a server is known as a 2-Tier model. However, there are Multi-Tier models that contain more levels. Usually a 3-Tier model has some sort of middleware between the client and the server. 3-Tier models have three nodes: clients, application servers that process data for clients, and database servers that store data for application servers.
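A 3-Tier flow can be sketched as three separated roles: the client asks a question, the application server does the processing, and the database server only stores data. All names here (the `database` dict, `app_server_total_sales`) are invented for illustration; a real deployment would put each tier on its own node.

```python
# Hedged sketch of a 3-tier request, with each tier as a separate layer.

# Tier 3: database server -- stores raw data, does no processing.
database = {"orders": [{"id": 1, "total": 20.0}, {"id": 2, "total": 22.5}]}

def app_server_total_sales():
    """Tier 2: application server -- fetches raw rows and processes them."""
    rows = database["orders"]                    # query the database tier
    return sum(row["total"] for row in rows)     # processing happens here

# Tier 1: the client only requests the finished result.
print(app_server_total_sales())                  # 42.5
```

The point of the separation is that the client never touches the raw data: processing load lands on the application tier, and storage concerns stay on the database tier.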
Multi-Tier or N-Tier models are more scalable because they distribute processing over multiple nodes. This improves reliability and performance through simultaneous load processing over several server nodes. However, N-Tier Client-Server models contain some disadvantages as well.
For example, the network must manage a greater load in general due to increased traffic. They are also more difficult to program because there are more devices that must communicate smoothly with each other.
Two alternatives to Client-Server Architecture are Client-Queue-Client and Peer-to-Peer Architecture. This article will describe each and then compare them to traditional Client-Server Architecture.
In Client-Queue-Client Architecture all endpoints, including servers, are simple clients. The server is located in external software. Client-Queue-Client Architecture is also referred to as passive queue Architecture. It was developed to expand on traditional Client-Server Architecture. It developed from trying to make one client act as a server for other clients, thereby multiplying the potential uses of clients. Perhaps Client-Queue-Client Architecture is better understood through example, since it essentially follows the same logic as the Client-Server model described above.
For example, two web crawlers instantiated on two different machines can query each other to learn whether given URLs (Uniform Resource Locators) are already indexed and known to the other machine. Web crawlers are programs or automated scripts that search the web methodically.
Search engines use spidering to make searches faster. Spidering is the technique used by web crawlers to copy pages visited during a search for later processing by a search engine. These downloaded pages are then indexed.
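The crawl-then-index loop can be shown without touching the network. In this sketch the "web" is a small in-memory graph of pages and hyperlinks (a real crawler would fetch URLs over HTTP), and the crawler visits pages methodically in breadth-first order, recording each page in an index.

```python
from collections import deque

# Toy "web": page -> pages it hyperlinks to. A stand-in for real HTTP fetches
# so the sketch is self-contained.
web = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": [],
}

def crawl(start):
    """Visit pages methodically (breadth-first) and index what was seen."""
    index, frontier = set(), deque([start])
    while frontier:
        page = frontier.popleft()
        if page in index:
            continue                             # already downloaded and indexed
        index.add(page)                          # "download" and index the page
        frontier.extend(web.get(page, []))       # follow hyperlinks outward
    return index

print(sorted(crawl("a.html")))                   # ['a.html', 'b.html', 'c.html']
```

The returned index is what a search engine would later process, which is why spidering makes searches faster: queries run against the prebuilt index, not the live web.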
Client-Queue-Client Architecture uses a passive queue to allow client instances to communicate directly with each other and refine their requests to servers. Servers act as passive queues, which can be, for example, a relational database, a component also central to traditional Client-Server networks.
A passive queue allows one software instance to pass a query to another software instance. This second instance then communicates the query to the passive queue (the database) and retrieves the response data in a scheduled manner. The response is then transmitted back through the queue to the original client instance, answering its request. This Architecture developed to simplify repeated software implementations. It evolved into Peer-to-Peer Architecture, but is practically obsolete today.
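The passive-queue handoff can be sketched with Python's standard `queue` module. The function names (`producer_client`, `consumer_client`) and the uppercase "processing" step are invented for illustration; the key idea is that the queue itself is passive, and client instances do all the active work of putting requests on it and pulling them off.

```python
import queue

# The queue is the passive element: one client instance puts a query on it,
# another client instance retrieves the work in a scheduled manner.
work_queue = queue.Queue()
results = {}

def producer_client(request_id, payload):
    """Client instance that hands its query to the passive queue."""
    work_queue.put((request_id, payload))

def consumer_client():
    """Client instance that pulls queued work and produces the response."""
    request_id, payload = work_queue.get()
    results[request_id] = payload.upper()        # stand-in for real processing
    work_queue.task_done()

producer_client(1, "index this url")
consumer_client()
print(results[1])                                # INDEX THIS URL
```

In a real deployment the two clients would run on different machines, and the queue (for example, a table in a shared relational database) would be the only thing they both touch.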
Peer-to-Peer (P2P) Architecture
Peer-to-Peer Architecture distinguishes itself by its distribution of power and function. Rather than concentrating power in the server, Peer-to-Peer models rely on the power and bandwidth of participants. They form ad hoc connections between nodes for sharing all kinds of information and files. Peer-to-Peer discards hierarchical notions of clients and servers (clients at the top, servers at the bottom) and replaces them with equal peer nodes that function simultaneously as clients and servers. This also discards the idea of a central server, which exists in Client-Server Architecture.
There are several classifications of Peer-to-Peer networks. These include pure/hybrid and structured/unstructured Peer-to-Peer networks. Pure P2P networks merge the role of clients and servers as equals and do not provide a central server for managing the network or a central router that forwards requests to other networks.
Hybrid P2P models, on the other hand, do contain a central server that stores peer information and responds to requests for information stored on that server. In this configuration, peers host the available resources themselves, since no central server provides this function.
Peers also make central servers aware of what resources they want to share and make those resources available to peers that request them. Routing terminals also function using known addresses, which are indexed to resolve an absolute address.
The structure of P2P networks is determined by the nature of the overlay network, which consists of all participating peers as equal nodes. Nodes in an overlay network are connected through virtual or logical links that create a path to the underlying network.
Essentially, overlay networks are networks built on top of other networks. Peer-to-Peer networks are considered overlay networks because they are usually built on top of the Internet. Structured P2P networks use a global protocol so that a search can be routed to any peer/node by any peer/node on the network.
To retrieve rare files, more structured overlay links are required. The most common structured P2P network is the distributed hash table (DHT). DHTs are decentralized distributed systems that store names and values. Any participating node in the network can look up and retrieve values. Maintenance of the DHT mapping is distributed among the nodes. The ownership of each file is assigned to a peer, but the addition or deletion of peers or files doesn't cause major disruptions. This makes them very scalable.
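The core DHT idea, hashing a name to decide which node owns it, can be sketched as follows. The node names are invented, and the simple modulo mapping is a deliberate simplification: production DHTs use consistent hashing (or schemes like Kademlia's XOR metric) so that adding or removing a node remaps only a small fraction of keys rather than nearly all of them.

```python
import hashlib

# Participating nodes (names are illustrative).
nodes = ["node-a", "node-b", "node-c"]

def owner(key):
    """Deterministically assign a key (e.g. a file name) to one node."""
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# Any participating node can compute the same mapping independently, so a
# lookup can be routed straight to the owner instead of being flooded.
print(owner("song.mp3") in nodes)                # True
print(owner("song.mp3") == owner("song.mp3"))    # True (deterministic)
```

This determinism is what makes structured networks good at finding rare files: exactly one place in the overlay is responsible for each key, and every node can compute where that place is.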
Unstructured P2P networks establish links more arbitrarily. To join, a peer only has to copy the links of an existing node and then add its own links as it develops. To find a desired file, however, the request must be flooded throughout the network. This does not always return the desired results, particularly if the requested file is rare, because there is no correlation between a peer and the content it holds. Flooding also increases network traffic, slowing down responses and file sharing.
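Flooding can be sketched as a breadth-first query over the peers' neighbor links, usually bounded by a time-to-live (TTL) hop count so the query does not circulate forever. The peer names, topology, and file placement below are invented for illustration.

```python
from collections import deque

# Unstructured overlay: each peer only knows its immediate neighbors.
neighbors = {
    "p1": ["p2", "p3"],
    "p2": ["p1", "p4"],
    "p3": ["p1"],
    "p4": ["p2"],
}
files = {"p4": {"rare.txt"}}                     # only p4 holds the rare file

def flood_search(start, filename, ttl=3):
    """Flood the query outward until the TTL expires or the file is found."""
    frontier, seen = deque([(start, ttl)]), {start}
    while frontier:
        peer, hops = frontier.popleft()
        if filename in files.get(peer, set()):
            return peer                          # a peer holding the file
        if hops == 0:
            continue                             # TTL exhausted on this branch
        for nxt in neighbors[peer]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops - 1))
    return None                                  # rare files may never be reached

print(flood_search("p3", "rare.txt"))            # p4
print(flood_search("p3", "rare.txt", ttl=1))     # None -- TTL too small
```

The second call shows the weakness the text describes: if the file sits beyond the TTL horizon, the flood misses it even though the file exists, and every hop along the way still generated traffic.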
The primary advantage of P2P networks is that all clients contribute their resources. These resources include computing power, bandwidth, and storage space. In traditional Client-Server models there are a fixed number of servers, so the addition of clients slows down network processing. In Peer-to-Peer models, as nodes are added, system resources increase (contributed by the added nodes) to accommodate demand.
Client-Server Architecture provides certain advantages over these other network models. For example, Client-Server models offer easier maintenance, security, and administration. Encapsulation, for instance, makes it possible for servers to be repaired, upgraded, or replaced without clients being affected.
Encapsulation is the process by which an object hides its data and methods, keeping them from being exposed to users. Also, because all data is stored on servers, the data is more secure. Servers control access and ensure that only screened clients can access and manipulate data. Again, since data is centralized on servers, updates occur on the server and are then transmitted to clients as they request services.
In P2P models, updates must be applied and copied to every peer in the network, which requires a lot of labor and is prone to errors. However, Client-Server paradigms often suffer from network traffic congestion. This is not a problem for P2P, since network resources grow in direct proportion to the number of peers in the network. Also, Client-Server paradigms lack the robustness of P2P networks. Robustness refers to a network's ability to bounce back or continue functioning if one of its components fails. If a server fails in a Client-Server model, the request cannot be completed. In P2P, a node can fail or abandon the request, and other nodes still have access to the resources needed to complete the download.