Introduction to Public-Resource Computing

Public-Resource Computing (p-r computing) relies on personal computers with excess capacity, including free disk space and idle CPU time. The idea of using these unused resources was proposed in 1978 by the Worm computation project at Xerox PARC, which used 100 computers there to measure the performance of Ethernet. Many academic projects followed to explore this approach, including Condor, a toolkit developed at the University of Wisconsin for writing programs that run on unused workstations, typically within a single organization.

The world's computing power and disk space is no longer primarily concentrated in supercomputers; instead, it is distributed across hundreds of millions of personal computers and game consoles around the world. This paradigm enables previously infeasible research while also encouraging public awareness of current scientific research and the creation of global communities centered around a specific scientific interest.

Large-scale public-resource computing became feasible with the growth of the Internet in the 1990s. Two major p-r projects predate SETI@home:

- The Great Internet Mersenne Prime Search (GIMPS)

- Distributed.net (d.net)

More recent projects include Folding@home and the Intel-United Devices Cancer Research Project.

Several efforts are under way to develop general-purpose frameworks for p-r and other large-scale distributed computing; for instance, the Global Grid Forum, formed in 1999, develops such projects collectively. Private companies are also developing systems for distributed computation and storage in both public and organizational settings, including Entropia, Platform Computing, and United Devices.

Public-Resource Computing does not really belong in the category of peer-to-peer applications and networks, because p-r projects usually rely on a central server architecture to produce work units and process the results; furthermore, the clients usually do not communicate with each other at all. Nevertheless, since p-r projects rely on the power and resources of client computers to do the work, there are some similarities with peer-to-peer networks.
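
The following sketch illustrates this client/server pattern: each client repeatedly fetches a work unit from the central server, computes locally without contacting any peer, and reports the result back. The server URL, endpoints, and message format are invented for illustration and do not describe any real project.

```python
# Minimal sketch of a p-r computing client (all names are hypothetical).
import json
import time
import urllib.request

SERVER = "http://example.org/pr-project"  # hypothetical central server

def compute(data):
    # Stand-in for the real scientific computation.
    return sum(data)

def client_loop():
    while True:
        # The central server produces work units...
        with urllib.request.urlopen(f"{SERVER}/workunit") as resp:
            unit = json.load(resp)
        # ...each client computes on its own, never talking to other clients...
        result = compute(unit["data"])
        # ...and the server collects and processes the results.
        body = json.dumps({"id": unit["id"], "result": result}).encode()
        req = urllib.request.Request(
            f"{SERVER}/result", data=body,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
        time.sleep(1)  # pause briefly between requests
```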

For further reading: Public Computing: Reconnecting People to Science

Requirements for tasks in p-r computing

Tasks should involve a high computing-to-data ratio: a long computation on a small amount of input data causes little network traffic. This is necessary to keep server traffic at a manageable level.
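
A back-of-the-envelope calculation makes the point; the work-unit size, computation time, and client count below are assumed values, not figures from any particular project:

```python
# How the computing-to-data ratio drives server bandwidth (assumed numbers).
WORKUNIT_BYTES = 350 * 1024   # assumed size of one work unit (~350 KB)
CPU_HOURS_PER_UNIT = 10       # assumed computation time per work unit
ACTIVE_CLIENTS = 500_000      # assumed number of participating clients

# Each client fetches a new work unit once every CPU_HOURS_PER_UNIT hours.
units_per_second = ACTIVE_CLIENTS / (CPU_HOURS_PER_UNIT * 3600)
bandwidth = units_per_second * WORKUNIT_BYTES  # outbound bytes per second

print(f"{bandwidth / 1e6:.1f} MB/s of server traffic")  # about 5.0 MB/s
# Halving the computation time per unit would double the required bandwidth.
```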

Applications should be capable of independent parallelism. Many data dependencies prevent the clients from working efficiently and self-sufficiently.
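
The sketch below shows what independent parallelism means in miniature, using local worker processes as stand-ins for remote clients; the work units and the analysis function are made up:

```python
# Independent parallelism in miniature: every work unit depends only on
# its own input, so units can run in any order on any worker.
from multiprocessing import Pool

def process_workunit(unit):
    # Stand-in for the real per-unit analysis; uses no shared state.
    return sum(x * x for x in unit)

# No work unit refers to the result of another one.
workunits = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(process_workunit, workunits)
    print(results)  # [14, 77, 194]
```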

Tasks should also be able to tolerate errors. A client may fail and return a wrong result, or a malicious user may deliberately send wrong results; the project should not be negatively affected by this.
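
One common safeguard is redundancy: the server sends the same work unit to several clients and only accepts a result on which a quorum of them agrees. The following sketch assumes each unit goes to three clients with a quorum of two; both numbers are illustrative:

```python
# Result validation by redundancy and majority vote (illustrative values).
from collections import Counter

QUORUM = 2  # a result counts only if at least two clients report it

def validate(results):
    """Return the agreed-upon result, or None if no quorum is reached."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= QUORUM else None

print(validate([42, 42, 7]))  # 42   -> the majority outvotes one bad client
print(validate([1, 2, 3]))    # None -> no agreement; recompute the unit
```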

--Ertelt 15:51, 31 Oct 2005 (CET)

back to SETI_at_home