Peer-to-peer networks, also known as Peer2Peer or P2P networks, are distributed networks of clients who are all equally privileged participants in the network.
Clients are called 'peers' or 'nodes', and each makes a portion of its resources, such as processing power, disk storage or network bandwidth, directly available to other participants of the network without the need for any intermediaries.
These shared resources are required in order to provide the services and content offered by the network.
'Hybrid' peer-to-peer networks are peer-to-peer networks where a centralised entity is required to offer some parts of the network service.
In 'pure' peer-to-peer networks, any single arbitrary node can be removed from the network without the network experiencing a loss in service.
This contrasts with a traditional client-server network, where the 'server' is a centralised, higher-performance system offering content or a service to one or more lower-performance 'clients' who only consume the services and do not contribute any of their own resources.
The internet, or ARPANET as it was then known, was originally designed with a peer-to-peer architecture. The Advanced Research Projects Agency Network (ARPANET) was a project started in 1966 that laid the foundations for what would eventually become the internet. The goal of the ARPANET was to share computing resources around the United States, originally with the aim of providing redundancy for military computers in the event of a military base being destroyed. These goals lend themselves organically to the peer-to-peer architecture, and the first few hosts on the ARPANET (UCLA, SRI, UCSB and the University of Utah) were connected together as equal peers.
This early internet did not have much in the way of security, as most of its early users were researchers who knew each other. Some of the first popular internet applications were Telnet (1969) and FTP (1971). Both utilised the client-server architecture, but due to the lack of security and partitioning of the early internet, usage of these tools was still largely symmetric: every host could Telnet or FTP to any other host, and most servers also acted as clients and vice versa.
Usenet, first deployed in 1980, was one of the first applications to utilise a peer-to-peer architecture as we would understand it today. Originally, there were two hosts, one at the University of North Carolina and the other at Duke University. Users were able to read and post messages to a specific topic (or 'newsgroup') by connecting to their local host. Periodically, the hosts would use the Unix-to-Unix Copy Protocol (UUCP) to synchronise the Usenet content directory. As the network grew, so did the number of topics and messages, and a dedicated TCP/IP protocol called the Network News Transfer Protocol (NNTP) was eventually developed to efficiently discover and exchange messages and topics. A new Usenet server joins the network by setting up a news exchange with at least one remote server. Periodically, the news from the remote server is downloaded onto the local server and any new posts on the new server are sent to the remote server. The content then propagates through the network via other servers contacting the remote server.
The Domain Name System (DNS) was an early example of a hybrid peer-to-peer network. In the early days of the internet, human-readable domain names were mapped to IP addresses through a text file called hosts.txt that contained a series of domain name and IP address pairings; e.g. jbm.fyi would be paired with 3.250.195.229. Users had to try to keep their hosts file up to date by regularly downloading the file from the internet. As the number of hosts on the internet grew, this quickly became infeasible.
DNS, released in 1983, established a hierarchical structure for domain resolution. Each domain has an authority, which holds the name server records for every host on that domain. When a host wants to know the address for a particular domain, it queries the closest name server. If that name server does not know the answer, it queries a higher authority, and so on until the root name servers for the whole internet are queried. The root servers hold information about the Top Level Domains (TLDs). TLDs are the last part of a domain name, such as .com. For example, for the domain example.jbm.fyi: the .fyi TLD would hold the address of the name server(s) responsible for jbm.fyi, and this name server would hold the records associated with any subdomains of jbm.fyi such as example.jbm.fyi. Each level of the hierarchy caches the records it receives from more authoritative name servers to speed up the process for future queries. This has enabled the network to scale from the few thousand domains that existed in 1983 to the millions that exist today.
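The walk down this hierarchy can be sketched in a few lines of Python using the third-party dnspython package. This is an illustration rather than a real resolver: it starts from one hard-coded root server, follows a single referral per level, and ignores caching, retries, IPv6 and referrals that arrive without glue addresses.

```python
# Iterative DNS resolution sketch (assumes the third-party `dnspython` package).
import dns.message
import dns.query
import dns.rdatatype

def iterative_resolve(domain: str, root_ip: str = "198.41.0.4") -> str:
    """Walk root -> TLD -> authoritative name server for an A record."""
    server = root_ip  # a.root-servers.net
    while True:
        query = dns.message.make_query(domain, "A")
        response = dns.query.udp(query, server, timeout=5)
        # An answer section means we reached a server that knows the address.
        for rrset in response.answer:
            for record in rrset:
                if record.rdtype == dns.rdatatype.A:
                    return record.address
        # Otherwise this is a referral: descend to a name server one level
        # down, using its "glue" address from the additional section.
        glue = None
        for rrset in response.additional:
            for record in rrset:
                if record.rdtype == dns.rdatatype.A:
                    glue = record.address
                    break
            if glue:
                break
        if glue is None:
            raise RuntimeError("referral without a glue address; sketch gives up")
        server = glue

print(iterative_resolve("jbm.fyi"))
```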
By the late 1990s, the number of internet users had grown explosively. Previously, the primary users of the internet were computer researchers and enthusiasts, but the 90s saw millions of ordinary people adopting the internet. Many of these new users were not interested in publishing or creating content and wanted to consume content on the net similarly to how they would read a newspaper or watch television. Typically, users had only a slow connection, and home computers were not very powerful. Combined with the difficulty of setting up two-way communications, these factors led to the development of asymmetric network links and the adoption of the client-server model. Service providers were able to provision powerful servers that could meet the needs of the less powerful clients, who did not need a permanent IP address and could be sitting behind firewalls or NATs.
The internet was originally designed on principles of co-operation and efficiency among researchers and enthusiasts. The mass appeal of the internet meant that many of the participants no longer believed in these original principles, and this led to a breakdown of co-operation. In 1994, the first major spam attack occurred, when a small law firm was able to post an advert on nearly every Usenet site. This became known as the Canter and Siegel "green card spam" and attracted disgust from the internet community, but such spam quickly became commonplace. The advert was not paid for by the law firm; instead, the users effectively funded it in the form of computing power.
The rise of this uncooperative behaviour led to security concerns and serious changes in how the internet operated. By default, a machine that could access the internet could also be accessed by any other machine on the internet. Network administrators quickly realised that normal users couldn't be trusted to take the security precautions necessary for this scenario and began to deploy firewalls allowing users to contact external servers, but not allowing external servers to contact the internal network. At the same time, the limited number of IP addresses meant it became impractical for every host to have one permanent address and dynamic IP addressing became the norm. Under dynamic IP addressing, clients would be assigned an available IP address by the ISP which would change every time they connected.
Network Address Translation (NAT) was also proposed as a solution to this problem. NAT effectively allows many users to share a single IP address by mapping one public IP address onto multiple private IP addresses. NATs also act similarly to firewalls, by not allowing any communication with the external network without going through the NAT. This is how most home networks work: each client on the network is assigned an internal IP address, such as 192.168.0.2, and clients are able to communicate amongst themselves using these addresses. When a device wishes to contact an external address, it contacts the NAT service (usually provided by home "routers"), which makes the client's request on its behalf using the external IP address. When a response is received, the NAT maps it back to the internal IP address and forwards it to the client. Most ISPs now use a combination of NAT and dynamic IP addresses, and many users sit behind several layers of NAT. These factors made the use of peer-to-peer networks significantly more challenging and led to the decline of peer-to-peer networks in favour of the client-server model.
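The bookkeeping a NAT performs can be illustrated with a toy mapping table. This is purely conceptual: a real NAT rewrites packets inside the network stack and also handles checksums, timeouts and protocol quirks.

```python
# Conceptual sketch of NAT port mapping (illustrative, not a real packet path).
import itertools

PUBLIC_IP = "203.0.113.7"            # the single address shared by all clients
_next_port = itertools.count(40000)  # public ports handed out in order
_mappings: dict[int, tuple[str, int]] = {}  # public port -> (private ip, port)

def outbound(private_ip: str, private_port: int) -> tuple[str, int]:
    """Rewrite an outgoing connection to use the shared public address."""
    public_port = next(_next_port)
    _mappings[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def inbound(public_port: int) -> tuple[str, int]:
    """Forward a response back to the private host that opened the mapping."""
    if public_port not in _mappings:
        # No client initiated this: unsolicited traffic is dropped, which is
        # why NATs also behave like firewalls (and why P2P gets harder).
        raise ConnectionRefusedError("no mapping; unsolicited packet dropped")
    return _mappings[public_port]

src = outbound("192.168.0.2", 51000)  # a client opens a connection outwards
print(src)                            # ('203.0.113.7', 40000)
print(inbound(40000))                 # the response maps back to 192.168.0.2
```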
The era of modern peer-to-peer file sharing began in 1999 when Napster was introduced. Napster was a peer-to-peer file sharing program that enabled users to download files stored on other users' computers. It was primarily used for sharing MP3 files and at its peak had over 80 million users. The popularity of the software eventually led to legal difficulties, with music artists suing for copyright infringement. Napster was a hybrid peer-to-peer network, as it relied on a central indexing server (which eventually led to its downfall), but it paved the way for future peer-to-peer networking technologies.
Today, peer-to-peer networks are employed in a wide variety of applications:
BitTorrent, often referred to simply as torrent, is the most popular file sharing platform in use today. In 2019, the protocol was responsible for generating 2.24% of downstream and 27.58% of upstream internet traffic. BitTorrent was first released in 2001 by Bram Cohen and was designed to share large files efficiently over lower-bandwidth networks. Rather than downloading a file from a single server, BitTorrent allows users to join a "swarm" of peers with whom they can exchange data. A complete copy of the full file is called a "seed" and a host distributing the seed is called a "seeder". The file being distributed is split into "pieces" and once a peer has received a piece, it becomes a source of that piece for other peers. Peers usually download multiple pieces non-sequentially and in parallel, reassembling all the pieces into one file once the download is complete. Once a seeder has distributed all of the pieces of the original file to the swarm, the seeder can disconnect from the network, and the peers will still be able to construct the full file by interacting with other members of the swarm. Once a peer has obtained a complete copy of the file, it can act as a seeder. In this way, the task of distributing the file is shared only by those who want to obtain a copy of it.
Each piece is protected by a cryptographic hash recorded in the torrent file, protecting against both accidental and malicious modifications. Torrent files are text files that contain a description of and metadata about the file they pertain to, along with information about the associated tracker. Trackers are servers that help to facilitate the efficient transfer of files by keeping track of where copies of the file reside on peers. When a user wishes to download a file via torrent, they must first obtain a copy of the torrent file, which tells their torrent client which tracker to interrogate to discover peers. The protocol does not include any mechanism to index torrent files; this is left to "indexes", websites that provide a searchable repository of torrent files.
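The per-piece integrity check can be sketched as follows, assuming the original BitTorrent v1 scheme in which the torrent file stores a SHA-1 hash for every fixed-size piece (the piece size and function names here are illustrative):

```python
# Sketch of BitTorrent-style piece hashing and verification (v1 uses SHA-1).
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB, a typical piece size

def piece_hashes(data: bytes) -> list[bytes]:
    """Split a file into pieces and hash each one, as a .torrent file does."""
    return [
        hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
        for i in range(0, len(data), PIECE_SIZE)
    ]

def verify_piece(piece: bytes, index: int, hashes: list[bytes]) -> bool:
    """Check a piece received from a peer against the published hash."""
    return hashlib.sha1(piece).digest() == hashes[index]

file_data = b"example payload" * 100_000
hashes = piece_hashes(file_data)             # shipped in the torrent file
first_piece = file_data[:PIECE_SIZE]         # as received from some peer
print(verify_piece(first_piece, 0, hashes))  # True
print(verify_piece(b"tampered", 0, hashes))  # False: the piece is discarded
```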
The BitTorrent protocol, as described above, is a hybrid peer-to-peer network due to its reliance on trackers, which introduces a centralised point of failure. However, this has been alleviated with the introduction of "multitrackers", which enable a single torrent file to support multiple trackers, and eliminated entirely with the introduction of "peer exchange" and "distributed trackers". Peer exchange (PEX) allows peers to update each other directly about the state of the swarm without contacting a tracker, but cannot be used to introduce a new peer to the swarm. Distributed trackers are used to introduce peers to a swarm and take the form of a distributed database built on top of a Distributed Hash Table (DHT). The only remaining aspect of BitTorrent that is centralised today is the use of torrent indexes to obtain torrent files. DHT databases can also be used to build an index of torrent files and are slowly replacing torrent indexes as the primary method for obtaining them.
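The lookup at the heart of a distributed tracker can be sketched briefly. BitTorrent's DHT is based on Kademlia, where node IDs and content keys share one 160-bit ID space and "closeness" is XOR distance; this toy version keeps every node in a single list instead of maintaining real routing tables.

```python
# Toy DHT lookup: find the nodes "closest" to a key by XOR distance,
# as in Kademlia (the basis of BitTorrent's distributed trackers).
import hashlib

def node_id(seed: str) -> int:
    """Derive a 160-bit ID, the same space as a torrent's info-hash."""
    return int.from_bytes(hashlib.sha1(seed.encode()).digest(), "big")

def closest_nodes(key: int, nodes: list[int], k: int = 8) -> list[int]:
    """The k nodes whose IDs are XOR-closest to the key store/serve that key."""
    return sorted(nodes, key=lambda n: n ^ key)[:k]

nodes = [node_id(f"peer-{i}") for i in range(1000)]
info_hash = node_id("some-torrent")
# These are the nodes a client would ask about this torrent's swarm.
for n in closest_nodes(info_hash, nodes):
    print(hex(n)[:18], "...")
```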
The BitTorrent protocol has been criticised for facilitating the transfer of illegal content such as copyrighted or illicit material. The fact that the BitTorrent protocol is entirely legal, and it is only the content that can fall foul of the law, has not stopped many ISPs from blocking or throttling torrent traffic. As a result, many users take precautions such as using VPNs to prevent ISPs from detecting their torrent activity. The authorities frequently block torrent index and tracker server addresses and attempt to take legal action against their operators; as a result, operators try to stay anonymous. Torrent indexes are usually revived under a slightly different name after they have been blocked, or are moved to the dark web where they can only be accessed with special software such as Tor. These factors motivated many of the efforts to further decentralise the protocol.
A blockchain is a peer-to-peer distributed database where every peer can guarantee that it has identical data to every other peer. Every transaction that occurs on the blockchain is executed and shared by every peer. Once a record has been accepted by the network, it is permanently, verifiably and immutably recorded on the blockchain. Blockchains were invented in 2008 by an unknown individual or group known as Satoshi Nakamoto, as part of the digital currency project that culminated in Bitcoin.
The fundamental principles of blockchain technology are*:
- Decentralisation: no single entity controls the database; every peer maintains its own copy.
- Immutability: once a record has been accepted by the network, it cannot be altered or removed.
- Verifiability: any peer can independently verify the integrity of the entire chain.

*Not all applications built on top of blockchain respect these principles.
New records are added to the blockchain through the following process:
1. A user broadcasts a new record (for example, a transaction) to the network, where it joins the pool of pending records.
2. Verifiers check that pending records are valid and group them into a candidate block.
3. Verifiers compete, via a mechanism such as Proof of Work or Proof of Stake, for their block to be accepted by the network.
4. Once a majority of nodes reach consensus, the winning block is appended to the chain held by every peer.
Blockchains have 3 fundamental elements:
- The blocks: the chain of cryptographically linked records.
- The nodes: the machines that each maintain a copy of the chain.
- The verifiers: the participants that validate records and propose new blocks.
The blockchain is made up of an ever-growing number of sequential blocks, where each block is linked to the preceding block, forming a chain. Each block contains a batch of data and a header. The data varies based on the application of the blockchain, but the header consists of two hashes and a timestamp: the first is the hash of the previous block and the second is a hash of the current block's data.
The chain can be verified by hashing the data inside the current block and ensuring this matches the hash of the current block in the block header. Next, the data from the previous block is hashed and matched against the hash of the previous block in the block header. This is repeated moving back along the chain until the root block is encountered, at which point the chain can be considered verified. Since it is computationally infeasible to find two inputs to a hash function that produce the same output, if any of the hashes do not match their expected values, the data within a block has been tampered with or corrupted.
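A minimal sketch of the structure and verification walk just described, using the simplification that the header's "previous hash" is a hash of the previous block's data (all field names are illustrative):

```python
# Minimal blockchain structure and verification sketch (illustrative only).
import hashlib
import time
from dataclasses import dataclass

def sha256(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

@dataclass
class Block:
    data: bytes       # the batch of records in this block
    prev_hash: str    # header: hash of the previous block's data
    curr_hash: str    # header: hash of this block's data
    timestamp: float  # header: when the block was created

def append_block(chain: list[Block], data: bytes) -> None:
    prev = chain[-1].curr_hash if chain else "0" * 64  # root block case
    chain.append(Block(data, prev, sha256(data), time.time()))

def verify(chain: list[Block]) -> bool:
    """Walk back along the chain, checking both hashes in every header."""
    for i, block in enumerate(chain):
        if sha256(block.data) != block.curr_hash:
            return False                  # block data was tampered with
        if i > 0 and block.prev_hash != sha256(chain[i - 1].data):
            return False                  # the link to the parent is broken
    return True

chain: list[Block] = []
append_block(chain, b"alice pays bob 5")
append_block(chain, b"bob pays carol 2")
print(verify(chain))   # True
chain[0].data = b"alice pays bob 500"
print(verify(chain))   # False: the tampered data no longer matches its hash
```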
Hashes are one-way cryptographic functions that take an input of an arbitrary size and produce an output of a fixed size.
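For example, SHA-256 always produces 256 bits (64 hexadecimal characters), whether the input is five bytes or five megabytes, and a tiny change to the input produces a completely different output:

```python
import hashlib

for message in (b"hello", b"hello!", b"hello" * 1_000_000):
    digest = hashlib.sha256(message).hexdigest()
    print(f"{len(message):>9} bytes in -> {digest}")  # always 64 hex chars out
```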
No single device should be able to control the whole blockchain; instead, there are numerous machines, or nodes, which all maintain their own copy of the blockchain. There must be consensus among a majority of the nodes in order to update the blockchain. Once consensus is reached, a new block can be added to the chain.
Note: It is possible to be a user of the blockchain without being a "full node". In this case, a user would obtain the most recent block from the longest available chain and verify it with a number of other nodes in the network.
Verifiers have the job of selecting transactions from the transaction pool, grouping them into a block and proposing this block to the network as the next block. Before including a record in a block, the verifier must check that it is valid. Validity depends on the exact application but could involve, for example, checking that a sender has sufficient balance to make a transfer. Once the verifiers have assembled their blocks, they compete with each other for their block to be accepted by the network. The two most popular mechanisms to settle the contest are Proof of Work (PoW) and Proof of Stake (PoS). Verifiers usually receive a financial reward if their block is accepted. For many blockchain systems, the same machines act as both nodes and verifiers.
Proof of Work was the original consensus contest proposed by Nakamoto. The process of computing PoW is called "mining" and participants are called "miners". The headers in PoW blockchains store an additional value called the "nonce". The job of miners is to adjust the value of the nonce so that the hash of the block header is less than a target value called the "difficulty target". The network adjusts the value of the difficulty target to maintain a constant rate of block production. As hash functions are one-way, the only way to find a correct nonce value is brute force, which is time-consuming. If any of the data in a block changes, the hashes in the block header change as well, and hence a new nonce value would need to be calculated. This principle secures the blockchain: for a past block to be modified, an attacker would have to redo the PoW algorithm for all blocks after the modified block, and then have the final block accepted by the network. The probability of this happening approaches zero as the chain grows longer. Nakamoto described PoW as "One-CPU-One-Vote": the longest chain in the network represents the majority decision, as it has the most CPU time invested in it.
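A toy version of the mining loop (real Bitcoin header layouts and the double-SHA-256 details are omitted, and the difficulty target here is artificially easy so the loop finishes in a fraction of a second):

```python
# Toy Proof of Work: brute-force a nonce until the header hash is below target.
import hashlib

def mine(header: bytes, target: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # found a hash below the difficulty target
        nonce += 1        # one-way hash: nothing smarter than trying again

target = 1 << 240  # very easy target: ~1 in 65,536 hashes qualifies
header = b"prev_hash|merkle_root|timestamp"
nonce = mine(header, target)
print("nonce:", nonce)
# Changing any data in the block changes the header, invalidating the nonce:
tampered = hashlib.sha256(b"tampered" + nonce.to_bytes(8, "big")).digest()
print(int.from_bytes(tampered, "big") < target)  # almost certainly False
```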
PoW algorithms have been criticised for the massive quantities of CPU time (and electricity) wasted brute-forcing the nonce, and the relatively small number of blocks that can be added to the chain per minute. PoS is an attempt to address these criticisms. In PoS, the probability of a verifier's block being accepted by the network is based on how large a stake in the network they have. Under PoS, two concepts protect the network: acquiring a majority of the stake is prohibitively expensive, and a verifier who attacks the network undermines confidence in it, devaluing their own stake.
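The stake-weighted selection itself can be shown in a few lines (a real PoS protocol adds verifiable randomness, minimum stakes and penalties; this only demonstrates the probability model, with illustrative stakes):

```python
# Stake-weighted verifier selection: chance of proposing a block ~ stake size.
import random
from collections import Counter

stakes = {"alice": 60, "bob": 30, "carol": 10}  # illustrative stakes

def pick_verifier() -> str:
    names, weights = zip(*stakes.items())
    return random.choices(names, weights=weights, k=1)[0]

wins = Counter(pick_verifier() for _ in range(10_000))
print(wins)  # roughly 60% alice, 30% bob, 10% carol
```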
Digital currencies, or cryptocurrencies, were the original application for blockchain technology. In the years since, blockchain has found a wide variety of applications, but cryptocurrencies remain the most popular and valuable. At the time of writing, the total market capitalization of the cryptocurrency market was over $2.5 trillion USD.
Bitcoin was the first blockchain-based digital currency and remains the most popular; at the time of writing, its market capitalization was over $1 trillion USD. Bitcoin is a global currency and payment system that enables users to transfer value without the need for intermediaries such as banks, outside the control of central banks, currency authorities and governments. Each user generates a public key, which is used as an address, and maintains a private key, which is used to secure access to their "wallet". Bitcoin is a PoW currency and adjusts the difficulty target such that one block is produced roughly every 10 minutes. The miner who finds the correct nonce is rewarded with a number of bitcoins.
Smart contracts are another revolutionary application for blockchain technology. Smart contracts are computer programs that are able to automatically execute, control or document legal events according to the terms of a contract. The earliest examples of smart contracts were vending machines. In 2013, Vitalik Buterin proposed the use of the blockchain as a decentralised global computer that could run a variety of applications, services or contracts.
Buterin created the Ethereum project, which specified the Ethereum Virtual Machine (EVM) and a cryptocurrency called Ether, which is used to pay for "gas". Gas is a unit used to describe the amount of computational power required to execute a given operation on the EVM. Users specify how much Ether they are willing to offer per unit of gas; this is called the "gas price". Ethereum uses PoW, but the verifiers must also perform any necessary computation and update the state of the virtual machine, which is recorded on-chain. Verifiers receive the gas price for each unit of gas they compute as a reward and hence are motivated to include transactions that have higher gas prices in their blocks. In practice, this means that users offering higher gas prices have their transactions processed first.
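The fee arithmetic is straightforward: the cost of a transaction is the gas it consumes multiplied by the gas price offered. The figures below are illustrative, except that a plain Ether transfer always costs a fixed 21,000 gas.

```python
# Illustrative Ethereum fee arithmetic: fee = gas used * offered gas price.
GWEI = 10**9    # 1 gwei = 10^9 wei; gas prices are usually quoted in gwei
ETHER = 10**18  # 1 ether = 10^18 wei

gas_used = 21_000      # a plain ether transfer always costs 21,000 gas
gas_price = 50 * GWEI  # the price this user offers per unit of gas, in wei

fee_wei = gas_used * gas_price
print(f"fee: {fee_wei / ETHER:.6f} ether")  # 0.001050 ether
# Verifiers sort pending transactions by gas price, so offering more
# than other users moves a transaction up the queue.
```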
Smart contracts are programs written in EVM instructions and recorded on the blockchain. Compilers are available for many popular programming languages that can compile down to EVM instructions. Once a contract has been "deployed" to the network, it is immutable and its EVM code can be inspected by anyone. Contracts are able to interact with other contracts and can hold a balance. There are now thousands of decentralised applications running on the Ethereum network, ranging from simple savings contracts that allow a user to deposit funds to be returned at a later date, to more complex applications such as online casinos. In fact, digital currencies can themselves be implemented as smart contracts, and this has enabled the Ethereum blockchain to become home to thousands of other digital currencies.
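Because a deployed contract's code and state live on-chain, anyone can query its read-only functions. Below is a sketch using the third-party web3.py library to read a balance from an ERC-20 style token contract; the RPC endpoint and addresses are placeholders, not real values.

```python
# Sketch: reading state from a deployed contract with web3.py (placeholders).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.invalid"))  # any Ethereum RPC

# Minimal ABI fragment for the standard ERC-20 balanceOf function.
ERC20_ABI = [{
    "name": "balanceOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

token = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001",  # placeholder
    abi=ERC20_ABI,
)
holder = "0x0000000000000000000000000000000000000002"      # placeholder
print(token.functions.balanceOf(holder).call())  # read-only, costs no gas
```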
Although Ethereum remains the most popular smart contract platform, many rival chains are now available. The majority of these chains also implement the EVM specification, so smart contracts designed for the Ethereum chain can be deployed to them with minimal modification, whilst other solutions, such as the Linux Foundation's Hyperledger, take a different approach entirely. These rival chains aim to offer additional features, alternative consensus mechanisms such as PoS, or faster transaction times.
Smart tourism can be defined as ‘tourism agents utilising innovative technologies and practices to enhance resource management and sustainability, whilst increasing the business's overall competitiveness’. Peer-to-peer networks can bring a number of benefits to the smart tourism sector.
Peer-to-peer networks offer the ability to form ad hoc networks for information exchange even in areas that lack network infrastructure. This could be used by tourists in remote or underdeveloped destinations to communicate or to locate attractions and services.
When abroad, many tourists need to obtain a supply of the local currency in order to participate in transactions. However, digital currencies could eliminate the need for a local currency supply. Service providers could accept digital currencies directly or exchanges could provide rapid conversion to a local currency (or digital representation). Many services such as ordering taxis or food are already provided via smart phone applications and adapting them to accept digital currency payments would be straightforward.
Blockchains could be used to help tourism providers such as travel agents, airlines, hotels, attractions and restaurants share data regarding tourists in a more intelligent, secure and privacy-respecting way. This data could be used to enhance the tourist experience, for example by managing queue times, to improve marketing, and to make travelling more convenient by ensuring that destinations like hotels already have the information they need.
Blockchains could also be used to eliminate travel documentation such as passports, visas and tickets entirely. These documents could instead be represented by Non-Fungible Tokens (NFTs) on the blockchain. As opposed to digital currency tokens, which are fungible, NFTs are unique tokens that are not equivalent to any other token. They can be used to represent ownership of a unique asset on the blockchain, such as a piece of art or a plane ticket.
Today's society as a whole largely reflects the client-server model, with large corporations such as Google and Amazon providing services to people. As a result, the internet is largely controlled by a small number of powerful corporations who provide the physical infrastructure, routers and servers. The cloud server market is particularly worrying: just three companies (Amazon, Google and Microsoft) control around 60% of it. This gives them a great deal of control over what content appears on the internet and who gets the fastest connections, along with the ability to shut down servers. The internet has been constructed in a top-down fashion. Peer-to-peer networks offer the opportunity to rebuild it from the bottom up, with users more in control of the content and data they share.
Similarly, in the financial industry, a relatively small number of banks control access to most of the population's assets. Many financial opportunities and markets are not available to smaller players. Central banks and governments are able to manipulate markets and the economy through regulation and monetary policy, and can inflate their currencies at will. Digital currencies offer the ability to hold an asset that cannot be manipulated through governmental pressure and that does not require intermediaries such as banks. Smart contracts are already beginning to offer many of the services traditionally offered by the financial industry; in the realm of digital currencies, these are often referred to as "decentralised finance" or "DeFi". If these areas see mass adoption, they could bring an end to government-controlled fiat currencies and open up more opportunities for individual players.
The applications of smart contracts, and indeed blockchain, are not limited to the financial industry. There are already decentralised applications on the Ethereum platform with a variety of non-financial offerings such as games, casinos and cloud storage. Blockchain applications are being proposed across a wide selection of industries. In the medical industry, they could help hospitals and researchers share data whilst respecting privacy. For the energy industry, they could help improve market efficiency and provide better data and predictions. Privacy advocates propose a decentralised identity provider based on blockchain technology, where the chain can vouch for who you are without releasing sensitive data to service providers. The property industry could use smart contracts to make buying and selling homes painless and more efficient. Blockchain is a young technology and these ideas are just the start; many of blockchain's finest applications may be beyond imagining for now.
Advances in modern wireless communication technology could give peers the ability to establish mesh peer-to-peer networks across wider areas without relying on traditional internet infrastructure. This could be particularly useful for mobile devices such as smartphones or connected cars. Mass adoption of this type of technology could lead to a future in which there is a parallel global internet. This would be most useful if a standardised network stack similar to TCP/IP could be established for peer-to-peer networks. A common network standard would enable multiple peer-to-peer protocols and applications to share a single general-purpose peer-to-peer network, rather than a new network developing around each protocol, as is common today.
Source material and references:
https://bookstack.jbm.fyi/books/principles-of-computer-communication-systems/page/peer-to-peer-networking