What’s the difference between peer-to-peer (P2P) networks and client-server?


Introduction

In this post, we compare the client-server architecture to peer-to-peer (P2P) networks and determine when the client-server architecture is better than P2P. For those of you unwilling to spend a few minutes reading through the article, I'll let you in on a spoiler: peer-to-peer is always better than client-server.

To understand why, let's first examine some definitions and the history of the two models.

Client-Server Introduction

With the widespread adoption of the World Wide Web and HTTP in the mid-1990s, the Internet was transformed from an early peer-to-peer network into a content consumption network. With this transformation, the client-server architecture became the most common approach to data transfer, with new terms like "web server" cementing the idea of dedicated computer systems serving this content. The client-server architecture designates one computer or host as a server and the other machines as clients. In this model, the server must be online at all times with good connectivity. The server provides its clients with data, and can also receive data from them. Some examples of widely used client-server protocols and services are HTTP, FTP, rsync, and cloud services. All of them have specific server-side functionality that implements the protocol, and the roles of supplier and consumer of resources are clearly divided.

Peer to Peer (P2P) Introduction

The peer-to-peer model differs in that all hosts are equally privileged and act as both suppliers and consumers of resources, such as network bandwidth and computer processing. Each computer is considered a node in the system, and together these nodes form the P2P network. The early Internet was designed as a peer-to-peer network in which all computer systems were equally privileged and most interactions were bidirectional. When the Internet became a content network with the advent of the web browser, the shift toward client-server was immediate, as the primary use case became content consumption.

But with the advent of early file-sharing networks built on peer-to-peer architectures, such as Napster (1999), Gnutella, Kazaa, and later BitTorrent, interest in P2P file sharing and peer-to-peer architectures increased dramatically, and these systems were seen as uniquely able to overcome obvious limitations of client-server systems. Today, peer-to-peer concepts continue to evolve inside the enterprise with P2P software like Resilio Sync (formerly BitTorrent Sync), and across new technology sectors such as blockchain, Bitcoin, and other cryptocurrencies.

With our definitions out of the way, let's examine some challenges with client-server networks.

Availability

The most obvious problem faced by all client-server applications is availability. With a dedicated server model, the server MUST be online and available to clients at all times, or the application simply will not work. Many things can impact server availability: software problems, operating system errors, and hardware failures. Routing errors and network disruption can also impact availability. In fact, with so many things that can go wrong (any one of which takes down your server, which takes down your application), it is little wonder that considerable time and resources are spent making servers highly available and trying to anticipate problems in advance. Specific departments, such as Operations, are often completely dedicated to the availability challenge, and entire industries, such as Content Delivery Networks (CDNs) and cloud computing, have been born to overcome the availability limitations of the client-server model, usually by allocating even more resources to the server side. All of this adds complexity and cost, since high availability demands that the system switch to backup hardware or an alternate internet service provider whenever service is disrupted, so the application can continue to operate smoothly. This is quite complex: you need to keep data synchronized between your live server and backup server, maintain alternate service providers, and plan software and hardware updates in advance to support uninterrupted operation.

In a peer-to-peer network, each client is also a server. If any one machine is unavailable, the service can be provided by any other available client, or by a group of clients, each operating as a node on the network. The peer-to-peer system will find the best available peers and request service from them. This gives you service availability that does not depend on any one machine and does not require developing a complex high availability solution.
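
To make that concrete, here is a minimal sketch in Python of a client that treats every peer as a potential server and simply fails over to the next one. The peer addresses and the request format are invented for the sketch; a real P2P system discovers peers through a tracker or DHT.

```python
import random
import socket

# Hypothetical peer list -- in a real system this comes from peer discovery.
PEERS = [("10.0.0.2", 9000), ("10.0.0.3", 9000), ("10.0.0.4", 9000)]

def fetch_with_failover(request: bytes, timeout: float = 2.0) -> bytes:
    """Try peers in random order; the first one that answers serves us."""
    for host, port in random.sample(PEERS, len(PEERS)):
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(request)
                return sock.recv(65536)
        except OSError:
            continue  # this peer is down or unreachable -- try the next
    raise ConnectionError("no peer is currently available")
```

Because any peer can satisfy the request, no single machine's failure takes the service down.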

If application availability is a challenge that keeps you up at night, you might be interested in learning more about an inherently highly available peer-to-peer solution for syncing and transferring enterprise data in real time by visiting the Resilio Connect product page.

In summary, peer-to-peer systems are naturally fault-tolerant and more available than client-server systems. Client-server systems can be made highly available only at great cost and with additional complexity.

High Load

Another recurring problem with client-server applications is high load, or unexpected demand on the server. This is really a subset of the availability problem above, but one that is difficult to anticipate and expensive to solve. For the application to function properly in the client-server model, you must have sufficient capacity at the server to satisfy client demand at any time. The more popular the application becomes, the more clients show up requesting access to the server. Planning for the worst (unexpected demand) is a major challenge of the client-server architecture. A single powerful client that consumes data faster than the others could consume all of the networking, disk operations, and server CPU. You want all clients to have access to the server, so you need to limit clients to certain consumption levels, guaranteeing each of them at least minimal server resources. This approach ensures the powerful client won't disrupt the others. But in practice, it usually means the server always serves each client in a limited way, even when it is not overloaded and could operate faster, which is an inefficient allocation of resources.
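
The per-client cap described above is typically implemented with something like a token bucket. Here is a minimal, illustrative sketch with assumed rates; real servers usually enforce this in the network stack or web server configuration rather than in application code.

```python
import time

class ClientLimiter:
    """Token bucket: a client may consume at most `rate` bytes/s,
    with short bursts up to `burst` bytes."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate          # refill rate, bytes per second
        self.capacity = burst     # maximum bucket size, bytes
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # cap enforced even when the server is otherwise idle
```

Note the inefficiency the paragraph describes: the final branch rejects the powerful client even when the server has spare capacity to serve it.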

In the enterprise setting, solving high load usually means allocating more resources to servers, storage, and infrastructure like the network. But when the application is not at peak demand (95%+ of the time), these additional resources are not needed and are, in fact, wasted. Planning for increased load often means large capital expense projects to buy more storage, more network capacity, and more servers, and may do little more than push the bottleneck to some other component of the system.

By comparison, peer-to-peer architectures convert each node into a server that can provide additional service. Every new user brings additional capacity, helping to solve high load problems organically. The powerful client that consumes all of the resources in the client-server model is actually an asset in the peer-to-peer model, where it acts as a super node, able to serve other peers at greater levels than the average node.
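
A back-of-envelope model makes the contrast clear. The capacities below are assumed, not measured: per-client bandwidth shrinks as clients arrive in the client-server model, while total swarm capacity grows in the P2P model.

```python
C = 10_000  # fixed server upload capacity, arbitrary units (assumed)
u = 100     # upload capacity each peer contributes (assumed)

for n in (10, 100, 1_000, 10_000):
    per_client = C / n       # client-server: a fixed capacity divided up
    swarm = C + n * u        # P2P: every new node adds capacity
    print(f"{n:>6} clients | client-server: {per_client:8.1f}/client | "
          f"P2P total capacity: {swarm:>10,}")
```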

To put the differences between these two models in perspective: in 2008, the BitTorrent network was moving over 1 EB (exabyte) of data every month. At the same time, the most popular streaming site on the internet (no need to mention the name) was on a run rate to move 1 EB of data every 2.4 years, roughly a 29-fold difference in monthly throughput. One system uses the client-server architecture; the other uses a peer-to-peer architecture.

And Netflix was still mailing DVDs.

In summary, peer-to-peer systems never suffer from high load challenges, and actually get stronger and more capable with increased demand.  

Scalability

Scalability means growing with your application, and it's a real challenge in the client-server model. Enterprise data is not getting smaller, and the number of files is always increasing. If your company is growing, you are adding more users and employees as well, and all of this places increased demand on your servers. Scaling the server infrastructure in response is capital intensive, in the same way as planning for peak load.

Each server needs to be provisioned for the specific number of clients it will support. When the number of clients grows, server CPU, memory, networking, and disk performance need to grow as well, and can eventually reach the point where the server stops operating. If you have more clients than a single server can serve, you probably need to deploy several servers. This means designing a system to balance and distribute load between servers, in addition to the high availability system we discussed previously.
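
As a sketch of that extra machinery, here is the simplest possible load balancer, round-robin over placeholder server names. Real deployments also need health checks, session affinity, and failover, which is exactly the complexity the client-server model accumulates.

```python
import itertools

# Placeholder server names, assumed for the sketch.
SERVERS = ["app-server-1:443", "app-server-2:443", "app-server-3:443"]
_next_server = itertools.cycle(SERVERS)

def pick_server() -> str:
    """Assign the next client to a server in round-robin order."""
    return next(_next_server)
```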

Scaling your infrastructure to handle larger data or more users means massive capital expense to increase server, storage and network infrastructure.   And again, you may find all you’ve accomplished is moving the bottleneck to some other component in the system.

In a peer-to-peer model, the more devices you have in a network, the more devices can participate in data delivery, and the more infrastructure each brings to the party. They all contribute network, CPU, and storage, offloading the central server. In this way, peer-to-peer systems can be considered organically scalable: increased scale comes for free with increased demand, instead of through the massive capital expense projects inherent in the client-server model.

In summary, P2P systems are organically scalable. More demand means more supply, making them ideal for applications that involve large data sets and/or many users or employees.

Below we cite some example applications that are common in the enterprise and describe how peer-to-peer architectures are ideal for solving each one.

Use Case: Build Distribution

Video games are getting larger. Today's gaming consoles are 4K capable and will soon be 8K capable, meaning there is no scenario in which the amount of data that must be moved in the production of new titles goes down. The problem generalizes to the entire software industry as platforms proliferate and build sizes increase. Development companies struggle to deliver builds faster to remote offices across the globe, to accommodate the growing number of remote employees working from home, or to distribute builds within one office to hundreds of fast QA machines on a LAN. In a client-server model, all remote offices download builds from a build machine, and the speed limit is set by the network channel that serves the build to all of them. A peer-to-peer model instead splits the build into independent blocks that can travel between offices independently. This removes the network limitation from the main office and combines the bandwidth of all remote offices to deliver builds faster. Usually, a peer-to-peer architecture is 3-5 times faster than client-server here.
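
To illustrate the block-based approach, here is a toy simulation (not Resilio's actual protocol): a build is split into blocks, and each round every office fetches one missing block from any site that already holds it, so blocks fan out between offices instead of all flowing from the build machine.

```python
blocks = set(range(8))  # block indices of one build (toy size)
have = {
    "build-machine": set(blocks),  # the origin starts with everything
    "office-1": set(),
    "office-2": set(),
    "office-3": set(),
}

rounds = 0
while any(owned != blocks for owned in have.values()):
    rounds += 1
    for office, owned in have.items():
        for b in sorted(blocks - owned):
            # fetch from ANY site that has the block, not just the origin
            if any(b in other for site, other in have.items() if site != office):
                owned.add(b)
                break  # one block per office per round

print(f"every office has the full build after {rounds} rounds")
```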

Another issue is distributing builds within a single office from a central server. Fast QA machines completely overload the central server's network and CPU, bringing it to an unusable state. Maintaining a local server or storage array in each office is difficult to manage, and the QA machines can still overload it, rate-limiting the distribution of builds and wasting development time. Combined, this is an almost unsolvable scalability issue. As we discussed above, a peer-to-peer approach is better when many clients request access to the same data: each QA machine can seed the data to other machines, keeping the server healthy and delivering builds blazingly fast.


Use Case: Data Delivery to Remote Offices

Delivering data to remote offices usually ends up overloading the central server. Even if each office's link is fast, many offices add up, requiring huge bandwidth channels into the central office. The problem is acute when you need to deliver large volumes of data such as documents, video, or images. The peer-to-peer approach solves this by allowing each remote office to participate in data delivery, which reduces the load on the centralized server and significantly reduces central server and networking requirements.

Furthermore, branch offices often have limited bandwidth on their links. Moving the same data over and over across this weak link is wasteful and inefficient, and that is exactly what happens in a client-server model.

P2P software solves this naturally by finding the best source for the data and the best route to it, regardless of whether that source is a local dedicated server or a remote one. Once a block of data is present in the remote office, it need not be downloaded from the central data center again, saving precious bandwidth over the branch office link.
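
A hypothetical source-selection rule captures the idea: prefer a copy of the block already inside the local office over the WAN link back to the data center. The names and data structures below are invented for the sketch.

```python
def pick_source(block_id: str, local_peers: dict, remote_sources: dict):
    """Each dict maps a peer name to the set of block ids it holds."""
    for peer, held in local_peers.items():
        if block_id in held:
            return peer       # LAN copy: fast, and free of WAN cost
    for peer, held in remote_sources.items():
        if block_id in held:
            return peer       # fall back to the central data center
    return None               # nobody has this block yet

# Example: office peer "qa-2" already holds the block, so the WAN is spared.
src = pick_source("blk-17", {"qa-2": {"blk-17"}}, {"datacenter": {"blk-17"}})
assert src == "qa-2"
```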


Use Case: Endpoint Management Systems

Patching and updating a large number of endpoint operating systems is a challenge particularly suited to P2P software. The file sizes are large (an update from Microsoft can be many GB in size), the PCs or hosts to be updated often number in the thousands, and the infrastructure to many branch offices or remote workers is often limited and unreliable. Centrally serving updates in a client-server model makes no sense given the branch office limitations mentioned in the section above. Additionally, demand for updates is sporadic, but peak demand for a critical update (such as a fix for a security vulnerability) can be intense, and exposure to a vulnerability for even a day can be catastrophic. Solutions like Microsoft's BranchCache are examples of solving the availability challenge with more servers and more infrastructure.

Peer-to-peer distribution of these updates is far more efficient and much faster than traditional client-server models. The updates can be delivered at peak speed without a massive investment in server infrastructure that has to be managed for each branch office. The approach is ideal for large enterprises such as financial institutions, nonprofits, and retail chains, where many office locations must be managed and secured.

Busting common P2P myths


Myth #1:
P2P networks are only faster when you download from many peers

While the speed of a P2P network grows as more clients join the transfer, point-to-point transfers and the distribution of data from one node to several nodes are faster too.

Myth #2:
P2P networks expose your network and computers to viruses, hackers, and other security risks

This misconception originates in the most visible consumer P2P use case: illegal file sharing. Illegal file sharing and intellectual property theft do expose your infrastructure and computers to all of these problems. That risk does not apply to an enterprise P2P application, since all entities participating in distribution are secured enterprise machines.

Myth #3:
P2P networks are less secure than client-server networks

As we saw above, peer-to-peer is just a way to establish connections and assign roles between machines. It does need additional security mechanisms that perform mutual authentication and authorization, as well as access control and traffic encryption. These security features are built into enterprise P2P solutions like Resilio Connect.
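
As one illustration of what those mechanisms can look like, here is a sketch using standard mutual TLS with an internal certificate authority. The file names are placeholders, and this is not a description of Resilio Connect's actual implementation.

```python
import ssl

def peer_tls_context() -> ssl.SSLContext:
    """Every peer presents a certificate signed by the enterprise CA,
    and refuses connections from peers that cannot do the same."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.load_cert_chain(certfile="peer.crt", keyfile="peer.key")   # our identity
    ctx.load_verify_locations(cafile="enterprise-ca.crt")          # who we trust
    ctx.verify_mode = ssl.CERT_REQUIRED  # mutual authentication + encryption
    return ctx
```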

How much faster are P2P networks vs. client-server networks?

We explored this question in a detailed whitepaper: Why P2P is faster. In a nutshell, P2P is always faster; how much faster depends on data size and scale. The larger they are, the bigger the difference. In the whitepaper's example, client-server took 3x as long to send a 100 GB file.
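
A standard back-of-envelope model shows where the speedup comes from (the numbers below are assumptions for illustration, not the whitepaper's measurements): the server must upload N full copies, while in an idealized swarm the peers re-upload pieces, so distribution time is bounded by the shared upload pool.

```python
F = 100e9     # file size: 100 GB in bytes
N = 50        # number of receiving clients (assumed)
u_s = 1.25e9  # origin upload rate: ~10 Gbps in bytes/s (assumed)
u_p = 1.25e8  # per-peer upload rate: ~1 Gbps in bytes/s (assumed)

t_client_server = N * F / u_s            # N full copies over one link
t_p2p = max(F / u_s,                     # the origin must send one full copy
            N * F / (u_s + N * u_p))     # total work / total upload pool
print(f"client-server: {t_client_server/60:.0f} min, "
      f"idealized P2P: {t_p2p/60:.1f} min")
```

With these assumed numbers, client-server takes about 67 minutes against roughly 11 for the idealized swarm; the exact ratio depends on the rates and the number of peers.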

It’s important to remember that using P2P is not only faster, but also:

  • Provides significant savings on equipment and infrastructure
  • Provides a more robust and resilient topology

Overview

In a peer-to-peer network, each client is also a server. If any one machine is unavailable, the service can be provided by any other available client, or by a group of clients, each operating as a node on the network. Peer-to-peer systems are therefore naturally fault-tolerant and more available than client-server systems.
