Master 14, a very rough version of the Peer-to-Peer Economy Proposal

Status: Working Draft | Revision: May 13, 2014 | Contact: brent.shambaugh AT GMAIL dot COM | Previous Versions: http://bshambaugh.org/P2PWebOS.html | Conceptual Implementation: EISPP Wireframes


P2P World-OS: A P2P Enterprise Platform

Introduction

We need a system that encourages learning and creates business and research opportunities through self-organization. The traditional model is failing us. We must go beyond it and personalize education, business, and research through self-organization, so that individuals can contribute their own ideas and work together toward common goals.

The current state of things is abysmal (questionable?). According to the Brookings Institution (1), "Educational Institutions are overly bureaucratic and poorly equipped to train students for newly emerging jobs." Similarly, The Economist says (2), "Too many colleges, it seems, still fail to align themselves with the needs of local employers, a mismatch that is bad both for the employers and for the potential employees, though arguably universities are even worse at doing this."

Self-organization and personalization as a new way of thinking may hold promise. According to Sir Ken Robinson, "schools are geared towards the needs of industrialism instead of the things that kids like" (3). He says the things kids like, which if nurtured may hold promise, are instead replaced with degrees that are worth increasingly less. Sugata Mitra adds to this by saying that "schools are not broken but obsolete" (4). He explains that we have moved beyond the need for a system that encourages conformity and interchangeability of trained people to the need for a system of learning through self-organization.

Sugata Mitra saw evidence that learning through self-organization works by placing unattended computers in India. The children taught themselves, and all they really needed was a little encouragement. Thus, he theorized that all that was needed for learning was broadband, collaboration, and encouragement. This theory was developed into the SOLE (Self-Organized Learning Environment) challenge.

In addition to Sugata Mitra's project, education is increasingly becoming personalized and offered at a distance. The Cato Institute says that "A new world of transnational education may be upon us. As with many other industries, education is increasingly unmoored from its domestic roots." (5) Sites like Coursera, MIT OpenCourseWare, and Udacity offer an introduction. The move of Georgia Tech to offer a $7,000 master's degree in computer science through Udacity gives the option more weight.

The basic premise for a system that encourages learning and creates business and research opportunities through self-organization may be summarized by Sir Ken Robinson: "Education is meant to take us into this future that we can't grasp ... nobody has a clue ... what the world will look like in five years time ... all kids have tremendous talents and we squander them, pretty ruthlessly ... I have a contention that creativity now is as important in education as literacy" (3).

Executive Summary:

A peer-to-peer economy is proposed to encourage entrepreneurship and learning through self-organization. The necessary components, with background research for future development, are anticipated. While not all the components may be used, it is expected that at some point a peer-to-peer economic platform will be developed for use on a variety of internet-enabled devices.

The Sensorica project, with its value networks, forms the backbone of this work. It is expected that extensive collaboration with them will occur in the future. With a peer-to-peer model and value networks it appears possible to allow any person to freely participate in any project in an open source fashion, whether it is based on software, hardware, or anything else that can be contextualized in a value network.

Perhaps the main challenge will be implementation, as the needed technologies have been in development for nearly a decade and are maturing rapidly. One component of this challenge is enabling people to contribute to the development of value networks without a burdensome amount of learning or effort. Another is enabling people to receive remuneration for their work without disrupting others' ability to create.

Adoption on a wide scale may prove difficult. Those who appear to be leaning towards acceptance of the peer-to-peer economy ethos are those from the hackerspace, makerspace, peer-to-peer, open source, and select research communities.

Peer-to-peer is both decentralized and distributed. There are no centralized organizations or servers and each peer has the ability to participate. All peers can either contribute value or receive value when working in a peer-to-peer fashion. With a Sensorica value network, each peer has the ability to voluntarily join a hierarchy with other peers to work on a project, and then disband when the project is completed.

Remuneration is not explicitly dealt with in a value network, but it provides a framework for compensation. Other technologies like Ripple and Bitcoin, in conjunction with the MNDF Project, are being considered for payment.

Ideally, the peer-to-peer economic platform will allow for the creation of virtual teams / "companies" that will be connected to other virtual teams by means of their value networks. Potentially, the system that evolves with the platform could also be an improvement on, or even a replacement for, existing means of managing intellectual property. Instead of hoarding and protecting intellectual property with patents, sharing through interconnected value networks may become the norm. Such sharing could be advantageous, since the potential interconnectedness could allow value networks producing products to reach a wider audience than they could reasonably attain by themselves. Remuneration could be obtained by connected value networks channelling profits to one another, or by each value network selling its own finished product and keeping all profits to itself.

Related to the peer-to-peer ethos is the culture of the Maker. Chris Anderson, in his "Ten Rules for Maker Businesses" (6), mentions being as open as possible, creating a community of support, and treating marketing as a job in some of his rules. A peer-to-peer economic platform could embody all of this. According to Anderson, openness is important for allowing others to use designs, and for building trust, community, and potentially a free source of labor. The risk of people stealing things is considered acceptable for the greater benefit of achieving the latter. Community is a very important part of it, and some of this community may be recorded by a value network. In addition, the platform achieves marketing goals, as it allows people to see linkages through linked data, formed from the interconnected value networks, in addition to the traditional textual descriptions found in social networks. This linked data, by design, allows people to find things that are very close to what they are looking for; they could, for example, submit a linked data version of their resume and obtain things that closely match it.

A goal of the platform would be to allow instances of it, at least in an abbreviated form, to run on virtually any hardware device. There would surely be different components that could be run in conjunction with it that are better suited to some devices than to others. One example may be engineering tools used for fluid dynamics and computer-aided design. These surely would not be suited to some cell phones, but would be useful for designing a product. Cell phones, however, could be useful for pushing data to value networks.

A Peer to Peer Economy

How should such a model with self-organization be developed? Perhaps with a P2P (peer-to-peer) economic model. At its core, I believe a P2P economy should allow for software and device interoperability, value networks, P2P computing, a P2P monetary system, a means to manipulate, create, maintain a history of changes to, control, access, secure, transmit, browse, query, display, and aggregate data, and, finally, love and the real world.

Software Interoperability

Software should work together so that people may use whatever software fits their needs best. One such attempt at making software work together is the ISO 15926 standard. This standard, developed for the capital projects industry, aims for interoperability among information and thus among software platforms. This means, for example, that I can design an object in a CAD (Computer Aided Design) program of my choice and another person can view this project in a CAD program of their choice. In addition, someone may want to use my CAD program in another program that is not by itself a CAD program. However, I may not want CAD at all. I may just want my business software to work together with theirs without lengthy exchanges. ISO 15926 does this too, with extensibility if needed. ISO 15926 is described more deeply in "Introduction to ISO 15926" (7). Moreover, an open source set of tools implementing ISO 15926, called IRING (8), is available for download.

An exploration of ISO 15926 with an open source platform occurred with the Simantics Project. This project developed a software platform for modelling and simulation that allows software packages such as OpenModelica, OpenFOAM, Paraview, Code_Aster, Fluent®, Apros®, and Balas® to be integrated as plug-ins (9, 10). With semantic web technologies included (11,12) different software packages can be made interoperable. This allows for consistency between the various tools to solve a problem.

Device Interoperability

All devices should be able to work together. This includes everything from another computer to a 3D printer. One well-known example of device interoperability is Bluetooth. Another is sensor network deployments. In the paper "Semantic Sensor Data Search in Large-Scale Federated Sensor Network" (13), the data schemas of multiple sensors are expressed in a common ontology so that they can be treated as one federated sensor network that can be queried over. Interoperability among devices, such as with these sensors, would be ideal for a peer-to-peer economy. Following the logic of "Introduction to ISO 15926", if no common bridge (common ontology) were agreed on beforehand, participants adding their devices to self-aggregating systems would have to work out these details each time a new member joined. In a broader sense, sensors as well as actuators are used for smart cities, as outlined by Dave Raggett of the W3C (14).
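
As a rough, non-authoritative sketch of the federation idea (not the approach of paper (13) itself), the snippet below maps two invented sensor schemas onto a small common vocabulary so their readings can be pooled and filtered together. All field names and vocabulary terms are made up for illustration.

```python
# Minimal sketch: align two ad-hoc sensor schemas to one common vocabulary
# so readings can be pooled and queried together. All names are hypothetical.

SCHEMA_MAPS = {
    "vendorA": {"temperatureCelsius": "temp_c", "timestamp": "observed_at", "id": "sensor_id"},
    "vendorB": {"tC": "temp_c", "time": "observed_at", "deviceId": "sensor_id"},
}

def normalize(vendor, reading):
    """Translate a vendor-specific reading into the common schema."""
    mapping = SCHEMA_MAPS[vendor]
    return {mapping[k]: v for k, v in reading.items() if k in mapping}

readings = [
    ("vendorA", {"temperatureCelsius": 21.5, "timestamp": "2014-05-13T10:00Z", "id": "a-17"}),
    ("vendorB", {"tC": 19.0, "time": "2014-05-13T10:01Z", "deviceId": "b-03"}),
]

federated = [normalize(vendor, reading) for vendor, reading in readings]
# Query over the federated view as if it were one sensor network.
warm = [r for r in federated if r["temp_c"] > 20]
print(warm)
```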

Value Networks

According to the P2P Foundation website, "Value networks (value webs) are complex sets of social and technical resources that work together via relationships to create value in the form of knowledge, intelligence, products, services, or social good." (15) Their rationale was described by Verna Allee in a book about value networks: "Social technologies support conversations, but are disconnected from work flows and performance goals. Traditional organizational hierarchies, rigidly designed work processes, and operational performance metrics often work against collaboration. People need new ways to organize quickly to ever changing, complex work environments." (16)

Wikipedia (17) describes the basic structure of a value network as a graph with nodes and edges. Nodes in a value network represent people (or roles), and edges represent interactions such as tangible and intangible deliverables. Tangible deliverables could include revenue, services, invoices, requests for proposals, etc. Intangible deliverables include the categories of knowledge and benefits. Knowledge can be policy development, technical know-how, process knowledge, etc. Benefits could include things such as favors offered to another person (e.g. emotional support) and prestige for volunteering time.
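
To make the graph description concrete, here is a minimal Python sketch of a value network as roles (nodes) and deliverables (edges) tagged as tangible or intangible. The roles and deliverables are invented; this is not Sensorica's software, only an illustration of the data shape.

```python
# Toy value network: nodes are roles, edges are deliverables tagged as
# tangible or intangible, following the description above. All names invented.
edges = [
    # (from_role, to_role, deliverable, kind)
    ("Designer",   "Fabricator", "CAD model",          "tangible"),
    ("Fabricator", "Customer",   "assembled sensor",   "tangible"),
    ("Customer",   "Fabricator", "invoice payment",    "tangible"),
    ("Mentor",     "Designer",   "technical know-how", "intangible"),
    ("Designer",   "Mentor",     "prestige / credit",  "intangible"),
]

def inbound(role):
    """Everything a role receives, grouped by tangible/intangible."""
    grouped = {"tangible": [], "intangible": []}
    for src, dst, deliverable, kind in edges:
        if dst == role:
            grouped[kind].append((src, deliverable))
    return grouped

print(inbound("Fabricator"))
# {'tangible': [('Designer', 'CAD model'), ('Customer', 'invoice payment')], 'intangible': []}
```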

Usually value networks are used as an analysis technique rather than as a dynamic way to keep track of the relations in an organization. Sensorica goes beyond the traditional value network definition and provides a way for multiple individuals to record relationships with open and decentralized software (18, 19). The three main parts of a Sensorica network are a value accounting and exchange system, a role system, and a reputation system. In the order listed, these systems allow members to record their work, assign roles to members using the value accounting and exchange system and member input, and see how members are regarded by other members. In addition, some form of visual analysis can be used.

The beautiful thing about value networks in Sensorica is that they allow for voluntary subordination based on the role system. This means that hierarchies form voluntarily. Moreover, the design is built for self-organization. Of course, Sensorica is not limited to this. Sensorica networks can link to other Sensorica networks allowing for the record of multiple groups in interaction. It is also interesting to note that semantic web technologies have been applied to Sensorica through the REA ontology (20, 21, 22). This suggests that it may be possible to integrate with ISO 15926, making it possible for value networks to intimately work with engineering and business actions in software. (also investigate Cambia's Karmeleon)

Peer-to-Peer Computing

Peer-to-peer computing began with an emphasis on file sharing. Napster was the first widely used file-sharing system (23). Gnutella, BitTorrent, Kazaa, and others followed later. A great emphasis was placed on anonymity (23). However, some saw that breaking this anonymity was beneficial to performance. Tribler (24) and INGA (25) went against total anonymity and developed social overlays so that individuals could find peers with interests similar to their own. In the case of Tribler, people could more easily find files to download by combining metadata caches with an epidemic protocol, and could download these files at a higher speed using helper peers. Using a social overlay addressed several peer-to-peer research challenges: centralization; system availability regardless of whether certain peers were available; integrity and trust, to clean up polluted data and choose trustworthy representatives; incentives, to induce cooperation and achieve good performance through such things as social recognition; and network transparency, to combat the many hurdles of communicating on somewhat restricted networks. All of this can be attributed to people self-organizing into trusted groups based on interest. These arguments also suggest the value of value networks, which appear to resemble social overlays. INGA also relies on self-organization through shared data. In addition, INGA revealed another benefit: the semantic information used decreased the number of queries needed to find other peers compared with systems without additional semantic information, such as Gnutella.

Perhaps self-organization through semantic relations could be applied to the topology management mentioned in T. T. Nguyen et al.'s paper on high performance peer-to-peer distributed computing (26). This is the only component that appears to be centralized. If this were applied to file sharing as well as high performance computing, then something very close to a pure peer-to-peer research environment may be created. This could be a boon to creativity. There would be no need for someone to ask permission to try out a half-baked idea only they and a few others could understand. People may be able to use their own computers. In addition, if network costs were an issue, mesh networks could be considered. Virtual research groups might form spontaneously. Of course, things like massive amounts of data causing both storage and bandwidth issues, and access to relevant literature, could present a problem.

Systems of equations appear frequently in real-world situations. These are usually solved with matrices and vectors (27). HAMA (28) is a distributed environment based on MapReduce (29) that allows for solutions by this method. One advantage of MapReduce, according to the HAMA paper, is: "For various applications running in an iterative way as well as the CG method, MapReduce is often better with respect to scalability compared to other computation alternations such as MPI and OpenMP" (28). Unfortunately, they point out that MapReduce will not work for every algorithm. A solution to this is their flexible computation engine interface.
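
For illustration only, here is a toy sparse matrix-vector product written in map/reduce style, the kind of kernel HAMA targets. It runs in a single process and is not HAMA or Hadoop code; it only shows how the computation decomposes into map and reduce steps.

```python
# Sketch of a sparse matrix-vector product y = A x expressed in MapReduce
# style. Purely illustrative, single process.
from collections import defaultdict

A = {(0, 0): 2.0, (0, 2): 1.0, (1, 1): 3.0, (2, 0): 4.0, (2, 2): 5.0}  # sparse A[i, j]
x = [1.0, 2.0, 3.0]

def map_phase(a, vec):
    """Emit (row, partial product) for every stored entry of A."""
    for (i, j), value in a.items():
        yield i, value * vec[j]

def reduce_phase(pairs):
    """Sum the partial products per row key."""
    sums = defaultdict(float)
    for row, partial in pairs:
        sums[row] += partial
    return [sums[i] for i in sorted(sums)]

print(reduce_phase(map_phase(A, x)))  # [5.0, 6.0, 19.0]
```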

HAMA is not peer-to-peer. A peer-to-peer implementation of MapReduce was developed by Marozzo et al. (30). Perhaps this implementation could make HAMA peer-to-peer. This would allow for better adaptation to peers joining and leaving the network than conventional MapReduce, since failure of a master would not mean failure of the system. However, for smaller numbers of nodes, there would be more network traffic than with conventional MapReduce.

In order to make MapReduce work in a peer-to-peer fashion, it is likely that a protocol allowing asynchronous message passing as a communication method would need to be used. TCP, a popular protocol used for the Internet, supports only streaming (26). P2P-MapReduce relies on the JXTA protocol, which uses asynchronous message passing (23, 30, 31). This protocol can of course be changed. In fact, discussion of changes to the JXTA protocol for peer-to-peer use was included in a paper introducing P2P-MPI (32, 33). Another protocol is the one used in T. T. Nguyen et al.'s paper (34), which supports both synchronous and asynchronous communication. They add that in large distributed networks, asynchronous modes give the best performance. Efforts at self-organization may work better on large distributed networks such as the internet. Of course, the preference of mode of communication is not without consequence. Further explanation is available in the book Parallel and Distributed Computation: Numerical Methods by Bertsekas and Tsitsiklis (35).
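
A minimal sketch of the asynchronous style itself (not JXTA, P2P-MPI, or Nguyen's protocol), using Python's asyncio queues: each peer hands a message to its outbox without waiting for the receiver to be ready, which is the key difference from a synchronous rendezvous.

```python
# Two peers exchanging messages asynchronously via queues. Illustrative only.
import asyncio

async def peer(name, inbox, outbox, rounds=3):
    for i in range(rounds):
        await outbox.put(f"{name} update {i}")   # fire-and-forget send (unbounded queue)
        msg = await inbox.get()                  # handle mail whenever it arrives
        print(f"{name} received: {msg}")

async def main():
    a_to_b, b_to_a = asyncio.Queue(), asyncio.Queue()
    await asyncio.gather(peer("A", b_to_a, a_to_b),
                         peer("B", a_to_b, b_to_a))

asyncio.run(main())
```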

Another thing to consider for peer-to-peer computation, perhaps to greater effect, is virtualization. JXTA creates a virtual network with a platform-independent open source protocol (36). T. T. Nguyen's scheme (26) may also support virtualization, since the use of an OS image is mentioned. For security purposes, such virtualization would likely be important.

The use of computing resources in peer-to-peer computation will need to be accounted for in value networks. In addition, one may want to find the lowest price for computation, especially when venturing out of a core value network. The paper "Economic Models for Resource Management and Scheduling in Grid Computing" (37) accounts for resource management issues with a computational economy framework utilizing trading and brokering services. It also describes the development of a computational economy, called Nimrod-G, which was used on the World-Wide Grid testbed spread across five continents. Further review of peer-to-peer computation is warranted, as this is a very broad and complex subject area.

Peer-to-Peer Monetary System

Supporting rationale for a peer-to-peer monetary system comes from Douglas Mark Rushkoff. In his dissertation, "Monopoly Moneys: The media environment of corporatism and the player's way out" (38), Rushkoff speaks of centralized and decentralized currencies. Centralized currencies are prevalent today, and decentralized currencies were present during the late middle ages. He believes that centralized currencies are biased toward the central authority of their issuers, lead to scarcity and hoarding, lead to concentration of wealth at the top, depend on an ever-growing economy, and encourage competition and individuality. Conversely, he believes that the decentralized currencies of the middle ages, based on IOUs for stored grain, promote spending, local interaction, cooperation, and community projects. He found that during the middle ages both decentralized currencies, based on grain, and centralized currencies, based on valuable metals, led to a strong economy. Decentralized, or what he called local, currencies were good for most daily activities, while centralized currencies were good for trade. This combination led to a vibrant economy, where everyone had plenty. Later, when strictly centralized currencies were enforced, the quality of life decreased to such an extent that Rushkoff believes centralized currencies were a primary cause of the Black Death.

The state of the economy today is certainly worrisome. Could we be experiencing some of the same effects that occurred around the middle ages? Financial Times (London) writer Martin Wolf, in a quote popularized by Noam Chomsky, concludes that "an out-of-control financial sector is eating out the modern market economy from inside, just as the larva of the spider wasp eats out the host in which it has been laid" (39). This is certainly an extreme statement, but it could be worth considering. Actually, he was outlining the view that our financial system has been taken over by risk junkies, mostly bank managers and short-term shareholders, who do not seem to be creating real value. However, following Rushkoff's logic, it might be an effect of centralized currency.

Concerning this proposal, the study of the economy, money, and centralized currency is far from complete. Research about Woodrow Wilson, Alexander Hamilton, Andrew Jackson, and Abraham Lincoln, and the banking system in the United States, might be of merit. Some light research suggests that money in the American colonies was scarce. Commodity money such as corn, cattle, and animal pelts was used in the colonies due to the shortage of money. Paper money was also used to pay the bills, even though it was prone to inflation. The Massachusetts Bay Colony was the first in western civilization to produce government-backed paper money. Other colonies soon followed, issuing IOUs in the form of paper notes to pay soldiers. Of course, the British were not happy about this (40).

Currency was eventually nationalized. The United States went through a series of national banks issuing notes (40). The latest is the Federal Reserve. The notes are backed by faith in the issuing party as well as by entanglement, such as agreements with the Middle East and China due to their large holdings of U.S. Government securities.

If centralized currency is a worry, there are a few developing technologies that could be considered. One of these is Payswarm, which has hopes of becoming the universal payment standard for the world wide web. Another is Ripple, which hopes to form decentralized trust networks. Yet another is Bitcoin, which is one of the first cryptocurrencies.

Payswarm is an initiative led by Digital Bazaar. Their homepage, payswarm.com, says that "the payswarm web platform is an open standard that enables web browsers and web devices to perform micro payments and copyright-aware, digital media distribution." It is not a bank or a currency. It is a way to federate existing and future payment systems on the web. People can label data with particular attributes and instructions regarding use and payment actions. In addition, it is hoped that people will use it as a front end to many payment systems.

Ripple is a project that was started by Ryan Fugger in 2004. Its goal is to work with trust networks, which are a decentralized means to route IOUs through nodes that are willingly indebted to each other. These nodes are generic; they can be things such as people or organizations. Each node acts as its own bank, accepting both debits and credits. True to its name, the project aims to transfer IOUs like ripples in a pond. Transactions starting at one node, then passing through nodes that are willingly indebted toward a desired destination node, will not change the total indebtedness of any node. The debt will remain within each node's circle of trust.
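
A toy sketch of the routing idea (not the Ripple protocol itself): find a chain of peers who are each willing to hold IOUs from the previous one, then shift debt along that chain so the payment never leaves anyone's circle of trust. The names and credit limits below are invented.

```python
# Toy IOU routing through a trust network. Illustrative only.
from collections import deque

# trusted_by[x][y] = remaining credit: how much y is still willing to be
# owed by x (i.e. y accepts x's IOUs up to this amount).
trusted_by = {
    "alice": {"bob": 50},
    "bob":   {"alice": 50, "carol": 30},
    "carol": {"bob": 30},
}

def find_path(payer, payee, amount):
    """Breadth-first search for a chain of peers willing to hold the IOUs."""
    queue = deque([[payer]])
    while queue:
        path = queue.popleft()
        if path[-1] == payee:
            return path
        for neighbor, credit in trusted_by.get(path[-1], {}).items():
            if credit >= amount and neighbor not in path:
                queue.append(path + [neighbor])
    return None

def pay(payer, payee, amount):
    path = find_path(payer, payee, amount)
    if path is None:
        raise ValueError("no trust path with enough credit")
    for debtor, creditor in zip(path, path[1:]):
        trusted_by[debtor][creditor] -= amount  # creditor now holds the debtor's IOU
    return path

print(pay("alice", "carol", 20))  # ['alice', 'bob', 'carol']
```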

Literature revolving around the initial Ripple implementation is available at archive.ripple-project.org. Among the contents you will find version 2 of the original Ripple paper, "Money as IOUs in Social Trust Networks & A Proposal for a Decentralized Currency Protocol" (41), as well as a recent study by Yahoo! Research titled "Mechanism Design on Trust Networks" (42). A more recent implementation of Ripple by a company called OpenCoin is available at ripple.com. In May of this year, OpenCoin received investment from Google Ventures (43). Lastly, Ripple could be compared to WebCredits (44), a project involving IOUs from Melvin Carvalho and others.

Bitcoin (45) is a peer-to-peer cryptocurrency that was described by Satoshi Nakamoto in 2008 and introduced in 2009. It is "based on cryptographic proof instead of trust", which allows "any two willing parties to transact directly with each other without the need for a trusted third party". This avoids the mediation disputes that occur with trust-based third parties, which leads to greater transaction speeds and transactions of smaller denominations. Each electronic coin in Bitcoin is a chain of digital signatures. Coins are prevented from being spent twice through a peer-to-peer distributed timestamp server that uses hashes. These hashes are known as the proof-of-work. The proof-of-work cannot be corrupted as long as honest parties control more CPU power than dishonest parties. All of this is maintained by broadcasting messages containing transactions between the nodes of each party. This broadcasting is further encouraged by allowing transaction fees and by allowing a special transaction that corresponds to the creation of a new coin. Bitcoin is described in greater detail in Satoshi's paper "Bitcoin: A Peer-to-Peer Electronic Cash System" (46), at weusecoins.com, as well as on Wikipedia (47).
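
To illustrate only the proof-of-work idea (real Bitcoin uses double SHA-256 over binary block headers and a dynamic difficulty target, not this simplification), here is a toy nonce search:

```python
# Toy proof-of-work: find a nonce such that the hash of
# (previous hash + data + nonce) starts with a given number of zero hex digits.
import hashlib

def proof_of_work(prev_hash, data, difficulty=4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, block_hash = proof_of_work("0" * 64, "alice pays bob 1 coin")
print(nonce, block_hash)
```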

One disadvantage of Bitcoin is that it is the digital equivalent of gold. This makes it behave somewhat like the centralized currencies of the middle ages. Its value increases over time, which can bias it toward hoarding instead of spending. Another worrisome aspect could be the legal side, but this was discussed by FinCEN (48), a bureau of the U.S. Department of the Treasury. Basically, there is no complication in purchasing or spending bitcoins, but other actions must be reported.

These technologies could be combined in multiple ways with Payswarm, and be used with value networks. First, Payswarm could be combined with Bitcoin, forming a mostly peer-to-peer monetary system; the main exception is exchanges. Second, Payswarm could be combined with Ripple, along with a virtual currency such as Bitcoin or XRP, a fiat currency (depending on how much peer-to-peer quality is desired), or some other representation of value. Third, Payswarm could be combined in some combination with the traditional monetary system.

In addition, the MNDF project (49) proposed earlier could be considered. This project has the aim of allowing funds to be distributed evenly amongst contributors, even if they are in different teams or value networks. It would likely draw upon Payswarm and Ripple, or perhaps Bitcoin. Manu Sporny, of Digital Bazaar, showed interest in it. Putting this together would hopefully allow teams to have the resources they need as quickly as they are able to organize and create.

Data in a Peer-to-Peer Economy

Data is a crucial part of a peer-to-peer economy, as is a means to manipulate, create, maintain provenance of, control, access, secure, message, display, search, query, browse, and aggregate it. If data is not handled correctly, any efforts to spread the concept could go nowhere.

Manipulation and Creation of Data

Manipulation and creation of data is defined here as the ability to modify existing data or add new data that does not yet exist. These concepts are closely related to other sections mentioned for data, but of particular interest are tools to manipulate and create data in the form of triples in a way that makes it easier for people who do not understand, or do not care to understand, the technical details. One such set of tools that could enable this mission is the Web Widgets Library (50), created by James Dylan Hollenbach with the guidance of Sir Tim Berners-Lee.

The Web Widgets Library includes some tools that appear similar to HTML forms, to increase accessibility for programmers that do not wish to deal with the minute details. Once implemented, it is hoped that this widget library will enable broader adoption of semantic web technologies. A focus is placed on tools that provide the basic functions necessary to build larger tools. One possible idea is to integrate Web Widgets with Dov Dori's Object-Process Methodology (OPM) for the semantic web (51, 52), which allows the display of underlying data in both a visual and a natural-language-based form.

If OPM and Web Widgets were combined, along with auxiliary things such as better ways to aggregate data, then perhaps an interface could be created that is intuitive and thus smooths the learning curve for those who either do not wish to understand the technologies or are unfamiliar with them. One such implementation of OPM that could be considered is Web-OPM (53). Of particular interest is the ability of OPM to display both classes in semantic web ontologies and instances of these classes. Focus for this idea is placed on the use of OPM for manipulation and annotation through the instances. The classes may be shown, but modification tools for the end user may be less important. Three widgets from the Web Widgets Library that may be of particular importance for manipulation and creation of data are the createinstance widget, the annotator widget, and the edit widget. Hopefully, use of these would allow users to click on an instance in the OPM visual diagram and perform an edit. Furthermore, the natural-language portion of OPM that is linked to the visual portion could provide an alternative view of the edit. The annotator and createinstance widgets also allow for ACL (access control list) restrictions with the ACL ontology (54, 55), limiting access to certain people and parties (at least at the document level).

Access of Data

A separate paper by Hollenbach explores using ACL with RDF in more detail (56). At the date of publication in 2009, there were two technologies that had been explored for use with ACL: WebDAV and SPARQL/Update. Both limit access to triples (chunks of linked data) to certain people, with the main difference being that WebDAV only limits access at the file level, whereas SPARQL/Update can potentially limit access to particular triples in files or stores of triples. WebDAV also requires that the entire file be rewritten, whereas SPARQL/Update only changes certain triples. SPARQL/Update seems like the preferable choice, but at the time of writing there was not a lot of Apache server support for it, so it was less likely to succeed. Both SPARQL/Update and WebDAV appear to be able to work jointly with ACL if need be. An example of this is the diagram at the end of Tim Berners-Lee's "Socially Aware Cloud Storage" (57), which was first written during the year of Hollenbach's publication. An implementation of WebDAV (58) is available at the CamlCity (59) website, with the ability to move between servers included (60). SPARQL/Update (61) is available with Apache Jena (62).
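
A small local sketch of the data shapes involved, assuming the rdflib Python library is available: an ACL-ontology authorization expressed in Turtle, and a SPARQL/Update applied to a separate graph. The resource and agent URIs are made up, and a real deployment would enforce the ACL on a server rather than in-process.

```python
# Express a Web Access Control authorization and apply a SPARQL/Update locally.
# Requires rdflib; URIs are illustrative only.
from rdflib import Graph

acl_doc = Graph()
acl_doc.parse(format="turtle", data="""
@prefix acl: <http://www.w3.org/ns/auth/acl#> .
<#auth1> a acl:Authorization ;
    acl:agent    <https://example.org/people/alice#me> ;
    acl:accessTo <https://example.org/data/projects> ;
    acl:mode     acl:Read, acl:Write .
""")

project_graph = Graph()
# The kind of fine-grained change SPARQL/Update allows without rewriting a whole file:
project_graph.update("""
PREFIX ex: <https://example.org/ns#>
INSERT DATA { <https://example.org/data/projects#p1> ex:status "active" . }
""")

print(len(acl_doc), len(project_graph))  # 5 authorization triples, 1 data triple
```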

Provenance of Data

Provenance (63) has to do with the production of data (64). At a high level, it involves Agents, Entities, and Activities. An implementation of provenance called the PROV ontology (65) allows for the recording of provenance information during the change or creation of entities by activities performed by agents. This can apply to the representation of the structure of activities in the real world or in the digital or conceptual world. In addition, provenance can be applied to concurrent version control so that multiple people can record provenance for an activity at once. One such example is R&WBase (66), which is an exploration of Git for triples. For every addition or deletion (or delta) in the virtual triple store with R&WBase there exists metadata describing it. This metadata may be expressed through the PROV ontology.
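
A minimal sketch of recording agent/activity/entity relations with PROV ontology terms via rdflib (assuming rdflib 6+, where serialize returns a string); the agent, activity, and entity URIs are invented.

```python
# An agent performs an activity that uses one entity and generates another.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("https://example.org/vn#")   # hypothetical value network namespace

g = Graph()
g.bind("prov", PROV)

g.add((EX.brent, RDF.type, PROV.Agent))
g.add((EX.calibrationRun, RDF.type, PROV.Activity))
g.add((EX.calibrationRun, PROV.wasAssociatedWith, EX.brent))
g.add((EX.calibrationRun, PROV.used, EX.rawSensorData))
g.add((EX.calibrationReport, RDF.type, PROV.Entity))
g.add((EX.calibrationReport, PROV.wasGeneratedBy, EX.calibrationRun))

print(g.serialize(format="turtle"))
```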

Provenance has been considered with the Sensorica value network (67). It has been applied to resource types, which are categories of resources (68), or categories of things created or used in processes. In Sensorica's case, provenance refers to where a resource comes from. Perhaps the Git-for-triples approach noted above could be a major help toward constructing value networks that are more representative of a community, since each person would be able to have their own viewpoint (branch) that could later be merged into a master version.

Feasibility of Merging Value Networks

One thing to consider is that different viewpoints may never be merged, and the contextualization of each viewpoint may not be exact. The triples stored in R&WBase could be instances of classes and predicates in ontologies. Fuzzy logic could be applied to these instances. A membership function, with weight values ranging from 0 to 1, could be applied to the subject, predicate, or object elements of a triple. When merging, triples could be chosen whose average membership function values for an element with a particular label are closest to one. To accomplish this, perhaps the subject and object could be kept constant while the membership value of the predicate changes.
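
A sketch of that merge rule with invented predicates and membership values: each branch assigns a membership value to its preferred predicate for the same subject/object pair, the values are averaged, and the candidate closest to one is kept.

```python
# Pick the predicate whose average fuzzy membership across branches is closest
# to one. Predicates and weights are made up for illustration.
candidates = {
    "ex:collaboratesWith": [0.9, 0.7, 0.8],   # memberships assigned in three branches
    "ex:worksFor":         [0.4, 0.6, 0.5],
}

def merge_choice(cands):
    averages = {pred: sum(vals) / len(vals) for pred, vals in cands.items()}
    # "closest to one" is simply the highest average, since memberships never exceed 1
    return max(averages, key=averages.get), averages

chosen, averages = merge_choice(candidates)
print(chosen, averages)  # ex:collaboratesWith wins (average ~0.8 vs ~0.5)
```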

It is difficult to get people to assign membership functions to things. A paper on fuzzy ontology development by Tommila et al. (69), where keywords are used to define fuzzy sets of entities, says that it would be too cumbersome or even impossible to define membership functions for entities. Instead, they settled on relationships between keywords. However, even maintaining weights for the relationships was thought to be burdensome. They thought some automated method could help. Perhaps this could involve syntactic and semantic similarities of existing information. ActiveGenLink gets into this, and will be described shortly.

This is mostly an illustrative example, since dealing directly with keywords in ontologies may not be a common use for most. It does say, however, that computational help might be considered to make dealing with fuzziness in multiple viewpoints easier. In addition, relating branches to an upper ontology such as UMBEL, OpenCyc, or SUMO could help with interoperability of branches. For different views that occur at the ontological level, two additional references, a fuzzy multi-viewpoints ontology (70) and a dissertation on multi-viewpoints ontologies (71), could be considered.

The finalization of linkages between various Git branches, as well as other linked data, may be difficult. Properties in different triples may mean the same thing. Words may be spelled differently or have different cases. There also may be a lot of triples to study in each branch to ensure full consideration during merging. One thing to consider for this is ActiveGenLink, which was recently implemented with the SILK framework. In a paper by Robert Isele and Christian Bizer (72), ActiveGenLink is described as a means to generate linkage rules by requiring a user to accept or reject a number of automatically generated link candidates. The number of generated link candidates is targeted to be smaller than the total number of links to consider. This lessens the burden on people. In addition, the method is flexible. Although the generated linkage rules include a transformation operator, a comparison operator, and an aggregation operator, anything can be used that follows the respective function. This generality allows concepts that have been used for unstructured data, such as semantic and syntactic similarities, to be included. Examples of clustering unstructured data with syntactic and semantic similarities are available in a paper by Vandic et al. (73) using semantic clustering by Specia and Motta (74).
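
A toy linkage rule in the spirit of the transformation / comparison / aggregation structure described by Isele and Bizer, not SILK's actual implementation: the labels, threshold, and similarity measure below are placeholders.

```python
# Transformation (normalize), comparison (string similarity), aggregation (average).
from difflib import SequenceMatcher

def transform(value):
    return value.strip().lower()

def compare(a, b):
    return SequenceMatcher(None, transform(a), transform(b)).ratio()

def linkage_rule(entity_a, entity_b, threshold=0.8):
    scores = [
        compare(entity_a["label"], entity_b["label"]),
        compare(entity_a["homepage"], entity_b["homepage"]),
    ]
    return sum(scores) / len(scores) >= threshold   # aggregation: average

a = {"label": "SENSORICA  ", "homepage": "http://www.sensorica.co"}
b = {"label": "Sensorica",   "homepage": "http://sensorica.co"}
print(linkage_rule(a, b))  # True with these toy values
```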

Specia and Motta get into conceptual frameworks called folksonomies built from unstructured data. They do this by relating to existing ontologies as well as things like Wikipedia and Google. They form partial ontologies, or linkages between elements in ontologies. This appears to share similarity with Tommila et al.'s partonomies (69), in that both describe relationships within clusters of related elements, both are accompanied by similar logic, and both are used similarly. In both cases, their efforts are used to improve search and browsing of data with domain-specific information. Further consideration of this material will likely be important in a peer-to-peer economy, as not all data may exist in a structured form.

Controlling, Accessing, and Securing Data

Controlling, accessing, and securing data are seen here as closely related. Controlling data is defined as allowing a person or party to regulate the accessibility to a particular body of data. Accessing data is seen as the ability of a person or party to view or use data based on a set of permissions in a database or through physical means. Securing data is defined as preventing unwanted people or parties from accessing data when not given permission.

One way to see controlling data is through vendor relationship management (VRM). This concept, which was started by Doc Searls, who is now at the Berkman Center for Internet and Society at Harvard University, allows people to own their data and only share it if necessary. The five principles of VRM are (75): [1] Customers must enter relationships as independent actors, [2] Customers must be the points of integration for their own data, [3] Customers must have control of data they generate and gather. This means they must be able to share data selectively and voluntarily, [4] Customers must be free to assert their own terms of engagement, [5] Customers must be free to express their demands and intentions outside of any one company's control. VRM is different from CRM (Customer Relationship Management), which is how businesses typically work with data, as highlighted in Doc Searls's talk at SXSW (76). Using concepts like VRM would work well with semantic web technologies, since they are designed to store data in a decentralized and distributed fashion across a network.

An example of VRM used with semantic web technologies is Project Danube (77), led by Markus Sabadello. The project aims to integrate vendor relationship management, customer relationship management, and the federated social web. Both VRM and CRM were described earlier. The federated social web is described more fully in the W3C incubator group report "A Standards-based, Open and Privacy-aware Social Web" (78). The federated social web, while not necessarily dependent on semantic web technologies, may be enhanced by them. An example is FOAF (79), which was mentioned earlier with SSL. A federated social web seems to be largely part of the peer-to-peer economy described here. The reason is that a value network seems to be largely a superset of the social network, and many value networks working together in a federated way seem largely a superset of a federated social network. The differences between a social network and a value network are outlined by Verna Allee in chapter 7 (80) of her value networks book (16).

For purposes of control, Project Danube uses specialized hardware called the Freedom Box (79). It provides control by limiting access to personal data, acting as its own autonomous device behaving as both a client and a server, with the ability to communicate with other Freedom Boxes in a mesh network. This limits physical access, since the user can own the Freedom Box, as well as access through the network, since it does not rely on a third party for communication. Control is also related to privacy, as the ability to control data influences the ability to maintain privacy. For discussion of control and privacy, see Eben Moglen's "Freedom in the Cloud" talk (82).

Project Danube is expounded upon more fully in the video "The Three Visions" (83). In addition, the Personal Data Ecosystem Consortium (84) and the writings of Markus Sabadello (85) provide additional information that is related to project Danube.

Accessibility was discussed with things such as FOAF+SSL, WebID, and WebDAV earlier. Two other things that could be considered for accessibility are the Creative Commons Rights Expression Language (ccREL) (86) and the Provenance Ontology (65). Review suggests that they might be used to inform access control procedures, but they do not directly provide a means to prevent access.

ccREL deals with work and license properties. Work properties cover the title, attribution, the type of file, the original source of the modified work, and where more permissions are located. License properties are a bit more extensive and deal with permitted and prohibited uses, actions required for permission, the legal jurisdiction of a license, the date of deprecation of a license, and the legal code. Since ccREL is machine readable, it might be possible to use the information in the license for access control and thus make it easier to be sure that license information is not inadvertently violated.
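
A sketch of what such machine-readable license data might look like, assuming rdflib is available. The document URI is made up; the cc: terms are taken from the ccREL vocabulary, and an application could consult them before granting a use.

```python
# ccREL work and license metadata that an access-control layer could check.
from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
@prefix cc:  <http://creativecommons.org/ns#> .
@prefix dct: <http://purl.org/dc/terms/> .

<https://example.org/designs/sensor-mount> a cc:Work ;
    dct:title "Open sensor mount" ;
    cc:attributionName "Example Value Network" ;
    cc:license <http://creativecommons.org/licenses/by-sa/3.0/> .

<http://creativecommons.org/licenses/by-sa/3.0/> a cc:License ;
    cc:permits  cc:Reproduction, cc:Distribution, cc:DerivativeWorks ;
    cc:requires cc:Attribution, cc:ShareAlike, cc:Notice .
""")

# For example: are derivative works permitted under this work's license?
ask = "ASK { ?lic <http://creativecommons.org/ns#permits> <http://creativecommons.org/ns#DerivativeWorks> }"
print(g.query(ask).askAnswer)  # True
```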

Provenance and Access Control (Change This)

Provenance may be expressed as interactions between Activities, Agents, and Entities. Some terms in ccREL may be seen as provenance, but provenance is not directly their scope. The terms in provenance are more specific to provenance as defined by the W3C Incubator Group: "Provenance of a resource is a record that describes entities and processes involved in producing and delivering or otherwise influencing that resource" (63). Terms from ccREL like permitted and prohibited uses do not seem to apply to this definition.

Using provenance with FOAF+SSL, WebID, and the ACL ontology could be beneficial for a peer-to-peer economy. A guess is that provenance could be seen in how value networks and roles in value networks interact. Further review may be necessary. Briefly, however, Bates et al. (87) discussed provenance with access control in cloud environments. They mentioned that maintaining access control lists (which the ACL ontology uses) is cumbersome in cloud environments due to time-consuming management and data migration. It is not yet known how FOAF+SSL, WebID, and the ACL ontology would work with the material mentioned in this paper, or whether access control lists in a semantic web context present the same problem. Information is sparse. Tim Berners-Lee's "Socially Aware Cloud Storage" mentions that provenance is out of scope. A PhD dissertation by Hassan Takabi (88) does discuss the semantic web, policy management, and policy-based access control in the cloud in the context of Role-Based Access Control, which might be derived from provenance, but makes no reference to Hollenbach's work.

Security could be an issue limiting the uptake of a peer-to-peer economy. Efforts should be made to secure data. One starting point that promises extensive information is the Resilient Computing curriculum (89) provided by the ReSIST project. This curriculum not only provides access to freely available course material on cryptography, but also provides aid in designing entire systems that are both dependable and resilient. Perhaps the whole-system perspective the curriculum leans toward is the best thing for security. Another deliverable, D38 (90), seems to suggest this when including an abstract from a set of slides on security by S. Smith Williams on page 52: "The best practice for providing a secure system to customers is to ensure that security concerns are provided for upfront, and during all phases of development. Adding security features to an application, after development of the main functionality is complete, is difficult." Fortunately, the coverage of the curriculum is broad. Topics such as project management, advanced probability and statistics, computer and network forensics, resilient distributed systems and algorithms, and an application track on nuclear power plants as a safety-critical system are included. Mention of nuclear power plants suggests that information is available for safety-critical systems, as a developed peer-to-peer economy would surely require.

Messaging of Data

Messaging may refer to message passing with peer-to-peer computing, described in an earlier section; short message service (SMS) for cellular networks (91); messaging for Payswarm with HTTP Keys (92); Web Messaging (93); and perhaps many other things.

The primary concern of messaging for Payswarm with HTTP Keys is to send and receive secure messages containing linked data for purposes of web payments. This technology is imperative for building the decentralized, distributed, and interoperable peer-to-peer economy, as it federates methods for monetary and/or quantified value exchange. HTTP Keys should not be confused with FOAF+SSL and WebID, which are geared toward controlling access to resources on the Web.

SMS capability, used largely with cell phones, is more ubiquitous for poor and marginalized populations than more advanced network and computer technologies, such as those mentioned in this proposal (94) (for sure???). However, it may be able to work in conjunction with these technologies so these people could be included. This might be done by relating SMS to URIs. RFC 5724 (95) defines an SMS URI scheme that works like the mailto URI scheme. It specifies messages that could be sent and received relative to URIs that are dereferenceable. This suggests that SMS messages could work with the semantic web and thus be easily integrated with any infrastructure developed.
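
For example, an application could hand a phone an RFC 5724 'sms:' URI asking it to prepare a message; the number and body below are invented.

```python
# Build an RFC 5724 sms: URI, analogous to a mailto: link.
from urllib.parse import quote

def sms_uri(number, body):
    return f"sms:{number}?body={quote(body)}"

print(sms_uri("+15105550101", "log 2 hours on sensor calibration"))
# sms:+15105550101?body=log%202%20hours%20on%20sensor%20calibration
```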

However, SMS will not work for everyone. The World Wide Web Foundation's VOICES Project (96) was developed to tackle problems with illiteracy, lack of technological experience, disability, and speaking a language not compatible with the keyboard, all of which make it difficult to use SMS.

Web messaging describes ways of communicating between browsing contexts, or environments where documents are presented to the users (97). This could be useful to integrate the various user agents (98) using the web while participating in a peer-to-peer economy.

Aggregation of Data

Aggregation of data was discussed earlier with Tribler and INGA. Another method to consider for aggregation is swarm intelligence. Robert Tolksdorf et al. (99) considered this with a model that used the Linda model of generative communication (100) and Ant Colony Optimization algorithms. More specifically, they used ants that operated on Linda's tuplespace, with out, in, and rd operations, to cluster by syntactic and formal concept similarities. The beautiful thing about this method is that no global state needs to be known for clustering: decisions by ants are made in a decentralized fashion. Syntactic similarities are determined by analyzing the syntax of URIs. For the formal concept similarities, parts of ontologies are considered that the ants find in their neighborhood. This is broken down into A-Box and T-Box clustering. In A-Box clustering, each ant performing an out operation has its own local type hierarchy, which it merges with each visited node, or with any ant that lands on the same node it is at. This hierarchical information helps an ant determine whether it is suitable to drop a triple it is carrying at a particular node, which works towards a cluster. T-Box clustering has two different sub-levels: the RDF-S-Level and the Schema-Level. The RDF-S-Level is a pre-defined set of triples available by default for each node. Schema-Level clustering deals with statements clustered like A-Box statements, which are useful for determining conceptual similarity. These include statements defining resources as classes or properties and statements expressing subclass or subproperty relations. Formulas are used to help the ants decide which nodes to visit and when to drop a triple. A type concentration helps ants see how semantically close the stored triples on a node are, which influences the probability that they will drop their triple there. Pheromones, left by ants transitioning from node to node, help guide other ants to find nodes with similar triples. Similar and more in-depth information is available from Sebastian Koske in his thesis (101), as well as in the version of Linda that Koske and Tolksdorf extended, called SwarmLinda, which was proposed by Tolksdorf and Menezes (102) and further developed by Daniel Graff (103). The terms A-Box and T-Box are parts of a knowledge base (104), and can be seen in terms of Description Logics (104, 105), the most commonly used logics for the W3C standard OWL (Web Ontology Language).
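
A toy rendering of just the drop decision, a simplification of Tolksdorf et al.'s formulas: an ant measures how concentrated similar triples are at its current node (here, crudely, triples sharing the carried triple's predicate) and drops its triple with a probability that grows with that concentration. The constants and the similarity measure are invented.

```python
# Simplified swarm-clustering drop rule. Illustrative only.
import random

def type_concentration(carried, node_triples):
    """Fraction of triples at this node that look similar to the carried one."""
    if not node_triples:
        return 0.0
    similar = sum(1 for (_, p, _) in node_triples if p == carried[1])
    return similar / len(node_triples)

def maybe_drop(carried, node_triples, rng=random.random):
    p_drop = type_concentration(carried, node_triples) ** 2  # favor dense clusters
    return rng() < p_drop

node = [("ex:alice", "ex:memberOf", "ex:netA"),
        ("ex:bob",   "ex:memberOf", "ex:netA"),
        ("ex:netA",  "ex:produces", "ex:sensor")]
carried = ("ex:carol", "ex:memberOf", "ex:netA")

print(type_concentration(carried, node))  # ~0.67
print(maybe_drop(carried, node))          # True with probability ~0.44
```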

The interlinking between clustered data may be enhanced with reasoning. This is usually done with description logics, as they are the most frequently used logics with OWL. Some examples of reasoning with DL are given in a reasoning survey from 1997 (link reasoning-survey.pdf) and through a review of the popular reasoner Pellet (link pelletReport.pdf). For reasoning with distributed and dynamic resources, as would be the case with a peer-to-peer economy, Kathrin Dentler et al. (106) considered swarm intelligence. According to Dentler, "Swarms are characterized by a number of properties, most importantly lack of central control, enormous sizes, locality, and simplicity." Furthermore, they state that "Those properties result in advantageous characteristics of swarms, such as adaptiveness, flexibility, robustness, scalability, decentralization, parallelism, and intelligent system behaviour." These are exactly the characteristics desired for a peer-to-peer economy, as exemplified by stigmergic collaboration (107). In addition, their model also does something else that is advantageous: reasoning is made over distributed data that remains local to the hosts. This preserves locality, which is cited as important by Project Danube and the Personal Data Ecosystem Consortium presented earlier. Thus, it fits well in context. The reasoning methodology they use involves autonomous agents that traverse the RDF graph and expand it with at most one of many basic inference rules. As in Tolksdorf's work, pheromones are left by agents to guide the further movement of other agents, and inspiration is drawn from ant colony optimization. In Dentler's case, these agents are called beasts instead of ants. The pheromones allow for stigmergy, which is embodied in swarm intelligence. Further work is still needed, but the idea works in principle.

According to Dentler, swarm intelligence has been used with the Linda model. This invites use. Unfortunately, according to Koske (101), a co-author of Tolksdorf's paper, Linda has trouble with distributed SPARQL queries, that is, distributed queries over RDF using the W3C recommendation SPARQL (108), because quantifiers such as ALL must cover the whole space and cannot be distributed. These queries would likely be needed in a peer-to-peer economy. A solution for SPARQL queries with data partitioned across multiple machines was suggested by Huang et al. (109) using the MapReduce framework.

MapReduce has a more constrained model than Linda. In Linda, both space and time are uncoupled. Space is uncoupled since any data added by any process to the data space may be removed by any other process. Time is uncoupled since any process may add or remove a piece of data from the data space at any time. By contrast, MapReduce relies on key-value pairs that are destined for other processes by virtue of their labelling, so space appears coupled. In addition, it appears time is coupled: if one of the compute nodes fails, all of the processes at that node must be redone, even if they are completed (29). MapReduce also favors large-scale and mostly static data sets. The apparent lack of both space and time uncoupling may not be ideal for stigmergic collaboration with data, since there really is no centralized plan. Is focusing solely on MapReduce a requirement, though? Could Linda be responsible for aggregating data into clusters and reasoning over it, while MapReduce is responsible for SPARQL querying? Perhaps, if the data were known, did not change too greatly, and the tasks were parallelizable, this would be possible. However, MapReduce jobs also tend to be time consuming. One way that Huang dealt with minimizing time spent in MapReduce was by constructing an n-hop guarantee. This allowed for some overlap between linkages between clusters of data in order to guarantee that SPARQL queries remain local as much as possible. Only non-local queries required MapReduce.

If a model with characteristics of both MapReduce and Linda needed to be developed, process algebra (110) formalisms could be used to investigate the performance as well as the behavior of the resulting outcome. A CCS (111) formalism was developed for Linda (112), and a CSP (113) formalism was developed for MapReduce (114). A comparison between CCS and CSP is available from Gleebeck (115). In addition, a comparison of CSP and Linda is available in Finkel (116). Further investigation is needed; process algebra may be part of the answer, if at all. In addition, in Huang's paper it appears that the focus is on partitioning a known RDF dataset across many machines using the connections between nodes and edges as a guide, rather than building one of unknown size from distant pieces of the web. Whether it is possible to build a state that could then be fed to Huang's method remains in question. It should also be investigated whether a peer-to-peer architecture is truly appropriate. Tolksdorf, Bontas et al. (117) make statements that suggest that tuplespaces, the basis for Linda and SwarmLinda, are not peer-to-peer networks. They claim that "the P2P approach introduces robustness and flexibility at the expense of efficiency" and say that the communication overhead for tuplespaces is lower than for P2P networks.

Querying of Data

SPARQL queries are difficult for a user with no knowledge of the underlying ontology or familiarity with semantic web technologies. Enrico Franconi et al. proposed a solution using the ontology to guide users' queries. Their paper, "An intelligent query interface based on ontology navigation" (118), describes this solution. The queries are guided by visual menus relying on the structure of the ontology. These menus allow for the addition and deletion of terms, or the substitution of terms for other terms of a different scope, either equivalent, more general, or more specific. Each query is also presented to the user as a continuous string of natural language text, composed of text constituents translated from a knowledge base, which is a form that a computer can use (119), by means of a lexicon and template map.
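
A sketch of the menu-driven idea, not Franconi et al.'s system: a SPARQL query is assembled from the class and property selections the interface offered the user. The vocabulary and prefix below are invented.

```python
# Assemble a SPARQL SELECT query from menu selections. Illustrative only.
chosen_class = "vn:Project"
chosen_filters = [("vn:usesResource", "?resource"),
                  ("vn:hasContributor", "?person")]

def build_query(cls, filters):
    lines = [f"?item a {cls} ."]
    lines += [f"?item {prop} {var} ." for prop, var in filters]
    variables = " ".join(["?item"] + [var for _, var in filters])
    body = "\n  ".join(lines)
    return ("PREFIX vn: <https://example.org/vn#>\n"
            f"SELECT {variables} WHERE {{\n  {body}\n}}")

print(build_query(chosen_class, chosen_filters))
# PREFIX vn: <https://example.org/vn#>
# SELECT ?item ?resource ?person WHERE {
#   ?item a vn:Project .
#   ?item vn:usesResource ?resource .
#   ?item vn:hasContributor ?person .
# }
```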

As a side note, SPARQL, at least in 2009, was an incomplete query language for OWL. Martin O'Connor and Amar Das of the Stanford Center for Biomedical Informatics Research proposed a query language for OWL called SQWRL (120). It allowed for set operators. It is guessed that this may allow for quantification when developing trading and brokering services in economies.

The natural language of the queries in Franconi's paper might be combined with Dov Dori's OPM, mentioned earlier. If this is feasible, then the object-process diagram that is part of OPM would produce a visualization, or picture, of the query. The result could also be displayed as a visualization. Following the result, adjacent triples could be exposed using follow-your-nose (121) browsing of linked data (122).

Different languages might be considered using something like BabelNet (123, 124), which combines the highly structured lexico-semantic relations from WordNet and the multi-lingual and continually updated parts of Wikipedia to create a multi-lingual lexicon. This lexicon, described further with automated creation (125) and in terms of the semantic web technology RDF (126), might be used for natural language processing techniques such as NLG (natural language generation) (127), used for the text in Franconi's paper. Various techniques such as template-based generation, rule-based generation, and trainable NLG could be considered (128). WordNet, an English lexicon used in BabelNet, has already been studied for NLG in Hongyan Jing's paper (129). Since BabelNet relies on a mapping between Wikipedia and WordNet, arguments from Hongyan Jing's paper might be used as a guideline.

Visualization of Data

Visualization of linked data is useful to an end user that has neither technical knowledge of the technologies involved nor domain knowledge of the information contained in the data. Visualizations take advantage of human perceptual abilities and reduce the cognitive overload that could occur with text-based presentations. Aba-Sah Dadzie and Matthew Rowe (130) make this point. In addition, they mention that various browsers have been developed that can deal with Linked Data, or at least XML or RDF. One form of visualization is node and arc diagrams, where the nodes are subjects or objects and the edges are predicates. In addition, other visualizations for data exist, such as facets for faceted browsing, various forms of charts, and icons or colors for different groups of similar data. Dadzie and Rowe's paper mentions things such as Fenfire and OpenLink Data Explorer for visualizing linked data as node-arc diagrams. In addition, they mention special features that exist in RDF Gravity and RelFinder, such as color coding to highlight related data properties, aggregation of links based on data attributes, point-and-select functionality for nodes, text overlays to display information about nodes, and zooming and panning of graphs. They also point out that Fresnel lenses (131) are popular for defining implementation-independent templates for data presentation. These lenses allow for selective representation of linked data to make presentation clearer to the end user. The Object Process Methodology mentioned earlier may also fit into their definition, and it would be interesting to explore whether Fresnel lenses can work in conjunction with it. Other tools that produce node-arc diagrams from RDF that can be visually stunning are Cytoscape (132), Gephi (133), and VUE (134). RKBExplorer (135), developed by the ReSIST project mentioned earlier, is another tool mentioned by Dadzie and Rowe; it produces node-arc diagrams that are clickable and produce additional information in adjacent windows. This is also like OpenLink Data Explorer, which also produces additional visualizations for Linked Data. Tools that do not include visualizations but are still useful to the end user include Marbles (136), which color codes each source of a particular fact or triple shown, and Tabulator (137), which loads RDF graphs and color codes the different entities. They also mention that it is important to have a good starting place for lay users when browsing the world of linked data, and that having URIs as a starting point presents a challenge. It is also good for users to remember where they have been. In addition, they mention that it is useful for a broad overview of the data to be presented to aid in browsing.

Entering the Data Space

The method of achieving a starting place for linked data varies. One method is geographical location, as in DBpedia Mobile (138). Another is full-text search, as in RDF Gravity (138.5). A third is ontology browsing, as with RKBExplorer mentioned above. A fourth is a natural language entry point, such as PowerAqua, which "...supports users in querying and exploring Semantic Web content" (139, 140). Of these, natural language is very attractive because it presents the end user with a low learning curve.
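
Before a natural language front end is in place, even a simple keyword lookup can serve as an entry point. The following is a minimal sketch assuming the public DBpedia SPARQL endpoint and the SPARQLWrapper package; the keyword_entry() helper and the regex FILTER are illustrative only, and a real system would use a proper full-text index or a tool like PowerAqua instead.

# Minimal sketch of a keyword entry point into Linked Data, assuming the
# public DBpedia SPARQL endpoint and the SPARQLWrapper package.
from SPARQLWrapper import SPARQLWrapper, JSON

def keyword_entry(keyword: str, limit: int = 5):
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT DISTINCT ?s ?label WHERE {{
            ?s rdfs:label ?label .
            FILTER (lang(?label) = "en" && regex(?label, "{keyword}", "i"))
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    # Each matching resource URI can serve as a starting node for browsing.
    return [(r["s"]["value"], r["label"]["value"])
            for r in results["results"]["bindings"]]

if __name__ == "__main__":
    for uri, label in keyword_entry("value network"):
        print(uri, "-", label)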

Unfortunately, PowerAqua relies on a centralized index, Watson (141). Perhaps YaCy (141.5, 141.75) could be used instead for the index. Since YaCy comes with four predefined servers, perhaps servers could instead alternate as masters and slaves, as in the P2P-MapReduce paper mentioned earlier.
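
For orientation, the following is a minimal sketch of querying a local YaCy peer as a decentralized index. It assumes a peer running on the default port 8090 and the yacysearch.json endpoint; the response field names used here ("channels", "items", "link") are assumptions from memory and may differ between YaCy versions.

# Minimal sketch of querying a local YaCy peer, assuming the requests package,
# a peer on the default port 8090, and its yacysearch.json endpoint.
# The response layout below is an assumption and may vary by YaCy version.
import requests

def yacy_search(query: str, peer: str = "http://localhost:8090", count: int = 10):
    resp = requests.get(f"{peer}/yacysearch.json",
                        params={"query": query, "maximumRecords": count},
                        timeout=30)
    resp.raise_for_status()
    data = resp.json()
    # Assumed layout: one channel containing a list of result items.
    return [(item.get("title"), item.get("link"))
            for item in data.get("channels", [{}])[0].get("items", [])]

if __name__ == "__main__":
    for title, link in yacy_search("peer-to-peer economy"):
        print(title, "->", link)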

Love and the Real World

Love and the real world refers to our humanity and the challenges of implementing a peer-to-peer system. One aspect is the moral implications and benefits of a peer-to-peer economy. A second is cooperation amongst groups in a distributed topology. A third is cultural inertia in organizations. A fourth is intellectual property. A fifth is possibilities for the future.

Moral Implications and Benefits of a Peer-to-Peer Economy:

Implementing this peer-to-peer economy as a technological solution may have moral implications due to the necessity for change among those working within the traditional hierarchical industrial model with its job titles and experts. Those who hold a position that makes society regard them as experts will be tested for what they truly know. Those whose job titles exist to support experts or those in authority may find that their job is no longer needed.

Doug Engelbart sheds light on how people will need to adjust to technology; for this case, let it be applied to peer-to-peer technology. In his talk about the role and impact of technology for the institution (156), he said that human paradigms would have to change in conjunction with advancements in technology. He claimed that organizations would need a strategy for how they are going to undergo a co-evolution with technology to deal with complex changes arriving at a rate never seen before. They would need to harness their collective IQ, or collective capability, and look at how they are going to share ideas, costs, knowledge, and solutions to improve their business. Those that did not adjust would perish.

There could be a positive side to a peer-to-peer economy. In a pure sense, decision making would be distributed and democratic. It would be like the phenomenon described in the book Wikinomics, with Wikipedia as an example. Anyone can contribute in an open way regardless of their background or position. People could also choose what they wish to contribute to, and what is referred to today as hobbies or personal interests could become what people spend their time doing to make a living.

Cooperation Amongst Groups in a Distributed Topology:

This peer-to-peer economy would take the topology of a distributed economy, as described by Allan Johansson (157). In this form, there is no central node and no node with many more connections than another. Due to the scale and the distributed organization of nodes, collaboration in this topology would likely occur by stigmergy rather than by direct communication.

Stigmergy in the economy may be explained by Mark Elliott's work (107), where group behavior changes depending on group scale. Thus, different things will need to be considered depending on how large the group is. In smaller groups of fewer than 25 people, collaboration occurs through social negotiation. In groups of more than 25 people, collaboration occurs by stigmergy and administration.

This stigmergy would result in self-organization of geographically dispersed teams. Various members and teams could produce deliverables that others could use in an open source hardware or software setting, or any other setting that lacks hierarchical structure.

Furthermore, stigmergy would be enabled by certain rules and permissions, such as habits and licenses. The effect of habit is illustrated in the megachurch, where people can be included in both smaller and larger groups (as would be the case in a peer-to-peer economy). Discussion of this is found in a Christianity Today article written about Charles Duhigg's book, "The Power of Habit: Why We Do What We Do in Life and Business" (150). Within a church there exists both a single large group and a number of smaller groups. The larger group forms the weak ties and gives people a feeling of being part of something big, while the smaller groups allow for intimacy through strong ties. People in the church also develop habits, which help them grow as individuals and be good Christians. At Saddleback, a megachurch led by Rick Warren, these habits include having a daily quiet time, involvement in a small group, and tithing ten percent.

In a peer-to-peer economy these habits would be different. They might be the hacker code of ethics illustrated by Steven Levy. In the book "Hackers: Heroes of the Computer Revolution", he described the hacker ethic as promoting sharing, openness, decentralization, free access to computers, and world improvement (151). Perhaps it is these habits that would hold the peer-to-peer economy together. Similarly, Chris Anderson's Ten Rules of Maker Businesses, mentioned earlier (6), could play a role. For example, the trust he described in the maker movement that allows for the exchange of designs could be seen as a habit that holds the community together.

Licenses could include the Creative Commons licenses mentioned earlier, expressed using ccREL, or licenses such as the Free Art License (169). In addition, licenses such as the CERN OHL (170), the Solderpad Hardware License (171), and the TAPR OHL (172), which include a patent license clause (173), could be considered.

Megachurches are corporations according to Scott Thumma (152). He described this as including an executive board, which appears to be an administration. In a peer-to-peer economy, this administration is replaced by stigmergy. Decision making is fully decentralized, with people taking the role of leader or expert at one time and follower or learner at another. People may be organized in small groups that could have a hierarchical structure organized by role and reputation, but this would only be temporary, lasting for the duration of the project. The larger group, consisting of many smaller groups, would be organized stigmergically. The stigmergic collaboration amongst the groups would allow groups to make their own decisions and avoid communication overhead. So the peer-to-peer economy would be like a megachurch with small groups and habits, but it would have stigmergic collaboration amongst the small groups instead of an administration.

Said another way, in a peer-to-peer economy, all of the weak ties would be formed by people working together without knowing it, perhaps feeling a sense of community through the hacker ethic. Moreover, strong ties may exist in the smaller groups that form, and each of these groups could have its own value network and its own product.

Stigmergic collaboration between the groups could change depending on whether the resources concerned are rivalrous, non-rivalrous, excludable, or non-excludable. Rivalry (153) of a certain resource could prevent people from making an infinite number of copies of a particular item, or cause two groups to have to compromise or cooperate when pursuing their separate goals. Excludability (154) of a certain resource could prevent certain groups from being able to use it if the cost was sufficiently high.

If a resource is non-rivalrous and non-excludable, then it is free for everyone to use and there is no great chance that it will be used up. This could include free and open source software that is also free as in beer. Such resources could be the easiest to adapt and add to, and licenses have helped make this happen. In a peer-to-peer economy, resources like this could be crucial for growth.

If a resource is non-rivalrous and excludable, then anyone could use it as long as they could pay. This could present a problem for sharing. An example of this is the legal hassle of sharing proprietary media, such as software and music, through peer-to-peer networks; see Napster or Kazaa. This could present some problems for contributors in a peer-to-peer economy, as they would either have to be subsidized, take on some sort of loan, or in some other way afford to purchase the resource they wish to adapt and add to. A centralization problem could also occur with these resources, as some items that must be purchased also require permission to adapt or add to.

If a resource is rivalrous and excludable, then people could use it as long as they had the ability to pay and as long as the resource remained available. An example of this could be the raw materials used by the peer-to-peer economy. Their cost could present a bottleneck for some in the peer-to-peer economy, as not all those who would want to use the resource would be able to afford it (even when the resource is not depleted). One possible way to lower costs is to make refining the raw materials as inexpensive as possible. For some raw materials, a centralized refining facility may be the least expensive solution, so there may be a trade-off between the autonomy of refining raw materials in small batches and the cost savings of refining in large batches.

If a resource is rivalrous and non-excludable, then people could use it as long as the resource remained available. This is also referred to as a common-pool resource. Elinor Ostrom presented on the implications of the use of common-pool resources in a community (155). In her talk, she suggested that people would want to cooperate if they were sure that others would engage in reciprocity. The likelihood of people cooperating also increased with the level of communication. If communication was absent, people would overharvest the resource. Ostrom wondered whether court systems could be made more effective venues to air concerns and so prevent overharvesting.

It is interesting to note that in stigmergic collaboration, direct communication is absent. Thus, following Ostrom's logic, could overharvesting occur with this form of collaboration? Could societal habits be used to prevent overharvesting, in the same way that habits are used in megachurches to help people be good Christians? What role, if any, would the court system play?

Cultural Inertia in Organizations:

Organizations may have difficulty knowing where to begin when adopting semantic web technologies. Expecting employees to contribute to open value networks may prove to be difficult. In addition, all of the challenges presented in the moral implications section would also apply.

Open value networks could be built with semantic web technologies, and they would likely face the difficulties of implementing these technologies even with the reward incentive the open value network brings. In this respect, Tommila et al. (69) speak about the effort of maintaining fuzzy relationship weights in ontologies; they report that industry experts in the process industry were worried about the amount of effort required. In the same vein, Robert Isele et al. (72) speak of the tremendous effort required to link together the various entities in linked data, a semantic web technology, that represent the same real-world object.

The same challenges encountered in the utilization of Big Data in the oil and gas industry may occur in the utilization of structured data in semantic web technologies. In a brief by Bain and Company (174), teams have difficulty finding the path to value creation and working through the implications for technology strategy, operating model, and organization. In addition, the brief argues that corporate leadership has difficulty finding the right capabilities and talent.

To bridge the gap to linked open data (122), which could be used in a peer-to-peer economy, it may be useful to have an example showing that things work. Fortunately, the Obama Administration has been spearheading initiatives in open government and open data with its Open Government Directive in 2009 (166), Open Data Policy in 2013 (167), and Project Open Data (168) on GitHub to help with adoption of the Open Data Policy. This will at least get more people thinking about data in an open format.

To encourage participation, it may be useful to revisit the journey and goals of Tim Berners-Lee. He begins with his 1989 proposal, "Information Management: A Proposal" (175), where he talks about ways to keep track of large projects in the face of high turnover at CERN. He continues to describe his vision in his book, "Weaving the Web" (176), where he recounts the history of the World Wide Web project and prospects for the future, such as the semantic web. In addition, it may help to offer the argument that semantic web technologies may help people perform better analysis in order to organize teams more effectively.

Intellectual Property:

Intellectual property, such as trademarks, copyrights, and patents, will need to be explored. Trademarks may need to be protected for each particular project. Copyright law will need to be considered with a more fine-grained approach than public versus private domain. Patents will need to be considered for each industry as appropriate.

Trademarks allow people to uniquely identify themselves with their work. Tarus Balog argued that reputation is protected with trademarks within an open source project (177). He felt that this was important due to the extensive work put into developing open source projects.

Copyright is examined with a more fine-grained approach by the Creative Commons Rights Expression Language (ccREL) (86) mentioned earlier. Creative Commons sought "widely applicable licenses that permit sharing and reuse". They defined terms under which work could be used beyond the default copyright, as well as ways to limit these terms. They also defined certain actions that the user must take under these terms.
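
For illustration, the following is a minimal sketch of expressing a license in RDF along the lines of ccREL, using rdflib. The work URI, attribution values, and chosen license are placeholders; cc: is the Creative Commons namespace http://creativecommons.org/ns#, and the exact properties a project uses would follow the ccREL specification (86).

# Minimal sketch of ccREL-style license metadata in RDF, assuming rdflib is
# installed; the work URI and attribution values below are placeholders.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import DCTERMS

CC = Namespace("http://creativecommons.org/ns#")
work = URIRef("http://example.org/designs/open-hardware-widget")
license_uri = URIRef("http://creativecommons.org/licenses/by-sa/4.0/")

g = Graph()
g.bind("cc", CC)
g.bind("dcterms", DCTERMS)

# The work points at its license; cc:attributionName / cc:attributionURL
# record how a reuser must credit the creator under that license.
g.add((work, CC.license, license_uri))
g.add((work, CC.attributionName, Literal("Example Contributor")))
g.add((work, CC.attributionURL, URIRef("http://example.org/contributor")))
g.add((work, DCTERMS.title, Literal("Open hardware widget design")))

print(g.serialize(format="turtle"))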

Patent discussion is an area that will need to be expanded upon. The following is a brief sampling of studies about the patent system, including those against it, as in Boldrin and Levine, and those for it, as in Epstein. Michele Boldrin and David K. Levine argue in "The Case Against Patents" (144) that the patent system should be abolished, since there is no evidence that it increases research and productivity, no evidence that a greater number of patents correlates with greater productivity, and strong evidence that patents have many negative consequences. However, they allow that weak patent systems may mildly increase innovation with weak side effects, while strong patent systems retard innovation with many negative side effects. Strong patent systems, they argue, are largely promoted by old and stagnant industries, not new and innovative ones.

Boldrin and Levine also mention that new innovators are subject to legal action from earlier patent holders, and that many modern products are made of many different components. In addition, the patent literature is complex to navigate, which makes avoiding encroachment on previous patents a hassle. The Cambia initiative (145) recognized this and developed PatentLens, which is now called The Lens (146). What Richard Jefferson termed Uncertainty, Fear, and Doubt in his article Science as Social Enterprise (147) can cause people to lose confidence in the prospects of creating a viable technology-driven enterprise. According to Jefferson, one solution has been for organizations to form patent pools and then cross-license to obtain platforms of enabling technologies, but this has favored large multinationals. It has also threatened sustainable private competitive advantage, and public value and confidence have been destroyed. Recently, however, some leading technology firms have decided that an open source model could lead to higher private and public returns.

In contrast, Richard A. Epstein of the Hoover Institution questions whether there really are too many patents in America (148). He observes the push for an expanded public domain by people who think it will increase innovation. However, he says the clear thrust of the anti-patent fervor is that innovators do not receive sufficient legal protection for patents even when they were responsible for the invention. He points out that Richard Posner, who sat as the trial judge for Apple vs. Motorola, dismissed the case and then wrote an article (149) discussing the problems posed by the structure and administration of current patent laws. Epstein believes this is part of a larger academic attack on market institutions that has led to an economy with 1.5 percent annual GDP growth. He says the debate in this area can be summarized as the belief that strong patent protection is a threat to the operation of the overall legal system, and that the solution is to narrow the scope of the patent system.

Regardless, things such as the patent system may not be enough. For educational institutions, for example, the Brookings Institute states, "Schools need better metrics for measuring commercialization and technology transfer. Right now, there is too much focus on numbers of patents and startups without determining which products make it to the marketplace and what type of social and economic impact they have. We would be in a stronger position to address innovation road blocks if we had more complete data on commercialization." (1, pg. 17) Perhaps interactions between value networks could help fill this void.

Possibilities for the Future

Will a peer-to-peer economy hold promise for being productive and enabling new opportunities that have never existed before, or will it follow what Robert Oppenheimer said during Trinity, the first atomic bomb test: "Now I am become Death, the destroyer of worlds" (--Bhagavad-Gita) (157)? There could be networked production with modular design, as mentioned in Chapter 5 of Kevin Carson's Homebrew Industrial Revolution (158), built upon the fabrication technologies presented in MIT's course "How to Make (almost) Anything" (159), as well as the infrastructure mentioned above. One additional piece of software that could help make this possible is Zach Hoeken Smith's BotQueue (160), which allows commands to be sent to multiple 3D printers over the network. In addition, one might wonder whether automation could be extended to a large industrial scale. One example is the Euromax Terminal Rotterdam (161), which had as its goal to "limit the time that drivers spent at the gate and automate the entire interaction" (162). Could automation such as this free up more resources for creativity? On an even more advanced level, manufacturing with no human interaction at all has been proposed in NASA's Advanced Automation for Space Missions (163), which discusses self-replicating systems. Interestingly, the technology described in NASA's report is similar to the technology put forth in the mission of the Center for Bits and Atoms at MIT (164), which launched Fab Labs as an outreach project (165) and used the technologies in the course "How to Make (almost) Anything" mentioned earlier.

Table of Links:

(1)Darrel West et al., Building an Innovation-Based Economy, Governance Studies and Brookings, November 2012, http://www.brookings.edu/~/media/research/files/papers/2012/11/13%20innovation%20technology%20west%20friedman%20valdivia/innovationbased%20economy.pdf
(2)Restoration Drama: America's under-appreciated community colleges hold promise, The Economist, April 28th, 2012, http://economist.com/node/21553476
(3)Sir Ken Robinson, Do schools Kill Creativity?, YouTube, January 6, 2007, https://www.youtube.com/watch?v=iG9CE55wbtY
(4)Sugatra Mitra, Build a school in the Cloud, https://www.youtube.com/watch?v=y3jYVe1RGaU
(5)Lester, Simon, Liberalizing Cross-Border Trade in Higher Education: The Coming Revolution of Online Universities, Policy Analysis, CATO Institute, No. 720, February 5, 2013, http://www.cato.org/publications/policy-analysis/liberalizing-cross-border-trade-higher-education-coming-revolutio
(6)Chris Anderson, Ten Rules for Maker Businesses, November 16, 2010, http://blog.ponoko.com/2010/11/16/ten-rules-for-maker-businesses-by-wireds-chris-anderson-%E2%80%94-rule-1/
(7)Introduction to ISO15926, FIATECH, October 2, 2011, http://www.posccaesar.org/wiki/ISO15926Primer
(8)iRINGUserGroup, http://iringug.org
(9)Simantics: Open Simulation Platform, VTT Technical Research Centre of Finland, January 24, 2013, https://www.simantics.org/simantics/documents/Simantics_Presentation_17_compressed.pdf
(10)Simantics Platform, Brochure, https://www.simantics.org/simantics/documents/Simantics-brochure_11.pdf
(11)Gayer, Marek et al., CFD Modelling as an Integrated Part of Multi-Level Simulation of Process Plants - Semantic Modelling Approach, VTT Technical Research Centre of Finland, Proceedings of the 42nd Summer Computer Simulation Conference (SCSC'10), pp. 219-227, 2010, https://www.simantics.org/simantics/documents/gayer10scsc.pdf
(12)Gayer, Marek, CFD Modelling as an Integrated Part of Multi-Level Simulation of Process Plants - Semantic Modelling Approach, OpenFOAM User's Day, 2010, VTT Technical Research Centre of Finland, http://www.marekgayer.com/data/mg/download/publications/gayer10scsc.ppt,
http://static.squarespace.com/static/51bec4d2e4b0d7c68c769459/51bec58ee4b084d49489bb8d/51bec598e4b084d49489bc76/1371456920022/FinnishOpenFOAMUsersDay2010_GayerVTT.pdf?format=original
(13)Calbimonte, Jean-Paul et al., Semantic Sensor Data Search in a Large-Scale Federated Sensor Network, http://ceur-ws.org/Vol-839/calbimonte.pdf
(14)Raggett, Dave, Smart cities as a web of people, things and services, Workshop 2, Web technologies for Smart Cities and the Internet of Things, March 14, 2013, http://www.w3.org/2013/Talks/smart-cities-dsr-mws-2013.pdf
(15)Value Network - P2P Foundation, Last Modified: July 11, 2012, http://p2pfoundation.net/Value_Network
(16)Allee, Verna et al., Value Networks and true nature of collaboration, ValueNet Works and Verna Allee Associates, 2011, http://www.valuenetworksandcollaboration.com/
(17)Value Network, http://en.wikipedia.com/wiki/Value_Network, Last Modified: April 8, 2013
(18)SENSORICA, http://www.sensorica.co/home
(19)Tiberius Brastaviceanu, Sensorica, an open, decentralized, and self-organizing value network, May 26, 2011, http://www.managementexchange.com/story/sensorica-open-enterprise-making
(20)McCarthy, William E., The REA Accounting Model: A Generalized Framework for Accounting Systems in a Shared Data Environment, The Accounting Review, Vol. LVII, No. 3, July 1982, http://www.msu.edu/user/mccarth4/McCarthy.pdf
(21)McCarthy, William E., An REA Model of an Economic Exchange, Michigan State University, http://www.msu.edu/user/mccarth4/cookie--elmo--basic%20REA.ppt
(22)William E. McCarthy, Homepage, http://www.msu.edu/user/mccarth4
(23)Milojicic et al., Peer-to-Peer Computing, HP Laboratories, Palo Alto, March 8, 2002, http://www.cs.ucsb.edu/~almeroth/classes/F02.276/papers/p2p.pdf
(24)Pouwelse, J.A. et al., Tribler: A Social-based Peer-to-peer system, http://iptps06.cs.ucsb.edu/papers/Pouw-Tribler06.pdf
(25)Loser, Alexander et al., Semantic Social Overlay Networks, IEEE Journal on Selected Areas in Communication, Vol. 25, No. 1, January 2007, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.7668&rep=rep1&type=pdf
(26)Nguyen, The Tung et al., High Performance Peer-to-Peer Distributed Computing with Application to Obstacle Problem, IPDPSW'10: IEEE International Symposium on Parallel Distributed Processing, Workshops and PhD Forum, 2010, pp. 1-8, http://homepages.laas.fr/elbaz/HOTP2P%20v6.0-1.pdf
(27)Curtis F. Gerald, Patrick O. Wheatley, Applied Numerical Analysis, 7th ed., Pearson, 2004, p. 76
(28)Seo, Sangwon et al., HAMA: An Efficient Matrix Computation with the MapReduce Framework, 2nd IEEE International Conference on Cloud Computing Technology and Science, 2010, http://csl.skku.edu/papers/CS-TR-2010-330.pdf
(29)Rajaraman, Anand et al., Mining of Massive Datasets, Stanford University, 2013, pp. 19-67, http://infolab.stanford.edu/~ullman/mmds.html
(30)Marozzo, Fabrizio et al., P2P-MapReduce: Parallel data processing in dynamic Cloud environments, Journal of Computer and System Sciences, May 20, 2011, http://www.academia.edu/2821203/P2P-MapReduce_Parallel_data_processing_in_dynamic_Cloud_environments
(31)JXTA, Wikipedia, http://en.wikipedia.org/wiki/JXTA, Last Modified: June 5, 2013
(32)P2P-MPI, http://grid.u-strasbg.fr/p2pmpi/
(33)Stéphane Genaud and Choopan Rattanapoka, A Peer-to-Peer Framework for Message Passing Parallel Programs, in Parallel Programming, Models and Applications in Grid and P2P Systems, vol. 17, pages 118-147, Ed. Fatos Xhafa, Advances in Parallel Computing, IOS Press, ISBN: 978-1-60750-004-9, June 2009, http://icps.u-strasbg.fr/upload/icps-2009-216.pdf
(34)Didier El Baz, The Tung Nguyen, A self-adaptive communication protocol with application to a high performance peer to peer distributed computing, 2010 18th Euromicro Conference on Parallel, Distributed and Network-based Processing, http://www.computer.org/csdl/proceedings/pdp/2010/3939/00/3939a327-abs.html
(35)Bertsekas, Dimitri P. et al., Parallel and Distributed Computation: Numerical Methods, Athena Scientific, Belmont, Mass., 1997, http://web.mit.edu/dimitrib/www/pdc.html
(36)JXTA - The Language and Platform Independent Protocol for P2P Networking, http://jxta.kenai.com
(37)Buyya, Rajkumar et al., Economic models for resource management and scheduling in Grid computing, Concurrency and Computation: Practice and Experience, 2002, 1:1507-1542, http://gridbus.cs.mu.oz.au/papers/emodelsgrid.pdf
(38)Douglas Mark Rushkoff, Monopoly Moneys: The media environment of corporatism and the player's way out, Dissertation, June 25, 2012, http://igitur-archive.library.uu.nl/dissertations/2012-0712-200619/rushkoff.pdf
(39)Martin Wolf, Comment on Andrew G Haldane, "Control rights (and wrongs)", Wincott Annual Memorial Lecture, 24th October 2011, www.wincott.co.uk/lectures/Wolf_comment_on_Haldane_Lecture.doc
(40)David Standish, The art of money: the history and design of paper currency from around the world, Chronicle Books, San Francisco, 2000
(41)Ryan Fugger, Money as IOUs in Social Trust Networks & A Proposal for a Decentralized Currency Protocol, April 18th, 2004, http://archive.ripple-project.org/decentralizedcurrency.pdf
(42)Arpita Ghosh et al., Mechanism Design on Trust Networks, 2007, http://research.yahoo.com/files/final.pdf
(43)Google Ventures - Google+ - Excited to welcome OpenCoin to the portfolio. Making waves... , https://plus.google.com/+GoogleVentures/posts/685GvnNe4UK
(44)Webcredits - Currency for the Digital Age, Taskify, http://webcredits.org/
(45)Bitcoin - Open Source P2P Money, http://bitcoin.org/en/
(46)Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, http://bitcoin.org/bitcoin.pdf
(47)Bitcoin, Wikipedia, http://en.wikipedia.org/wiki/Bitcoin
(48)Application of FinCEN's Regulations to Persons Administering, Exchanging, or Using Virtual Currencies, Guidance, Department of the Treasury - Financial Crimes Enforcement Network, March 18th, 2013, FIN-2013-G001, www.fincen.gov/statutes_regs/guidance/pdf/FIN-2013-G001.pdf
(49)bshambaugh.org - MNDF Project, April 22nd, 2013, http://bshambaugh.org/MNDF_Project.html
(50)James Dylan Hollenbach, A Widget Library for Creating Policy-Aware Semantic Web Applications, Master's Thesis, Massachusetts Institute of Technology, June 2010, http://dig.csail.mit.edu/2010/rdf-widgets/thesis.pdf
(51)Dov Dori, The Visual Semantic Web: Unifying Human and Machine Semantic Web Representations with Object-Process Methodology, Technion, Israel Institute of Technology, Massachusetts Institute of Technology, http://www.cs.uic.edu/~ifc/SWDB/papers/Dori.pdf
(52)Dov Dori, Object-Process Methodology and Its Application to the Visual Semantic Web, Pre-Conference Tutorial, PT 1, ER-2003, Chicago, October 12th, 2003, Technion, Israel, MIT, USA, http://www.er.byu.edu/er2003/slides/ER2003PT1Dori.pdf
(53)web-opm - Web-based Object-Process Methodology (OPM) CASE Tool, Google Project Hosting, https://code.google.com/p/web-opm/
(54)WebAccessControl - W3C Wiki, http://www.w3.org/wiki/WebAccessControl
(55)http://www.w3.org/ns/auth/acl
(56)James Hollenbach et al., Using RDF Metadata To Enable Access Control on the Social Semantic Web, ISWC 2009, 2009, http://dig.csail.mit.edu/2009/Papers/ISWC/rdf-access-control/paper.pdf
(57)Tim Berners-Lee, Socially aware cloud storage, Design Issues, 2009, http://www.w3.org/DesignIssues/CloudStorage.html
(58)L. Dusseault, Ed., HTTP Extensions for Web Distributed Authoring and Versioning (WebDAV), RFC 4918, June 2007, http://tools.ietf.org/html/rfc4918
(59)Informatikburo Gerd Stolpmann, WebDav Reference Manual, http://projects.camlcity.org/projects/dl/webdav-1.1/doc/html-main/index.html
(60)Informatikburo Gerd Stolpmann, Class type Webdav_client.webdav_client_t,
http://projects.camlcity.org/projects/dl/webdav-1.1/doc/html-main/Webdav_client.webdav_client_t-c.html
(61)Paul Gearon et al., SPARQL 1.1 Update, W3C Recommendation, 21 March 2013, http://www.w3.org/TR/sparql11-update/
(62)ARQ - SPARQL Update, Documentation - Apache Jena, http://jena.apache.org/documentation/query/update.html
(63)W3C Provenance Incubator Group, Overview of Provenance on the Web, Semantic Web Activity, November 30, 2010, http://www.w3.org/2005/Incubator/prov/wiki/images/0/02/Provenance-XG-Overview.pdf
(64)Yolanda Gil, Simon Miles, Eds., PROV Model Primer, http://www.w3.org/TR/2013/NOTE-prov-primer-20130430//
(65)Timothy Lebo et al. Ed., PROV-O: The PROV Ontology, http://www.w3.org/TR/prov-o/
(66)Miel Vander Sande et al., R&Wbase: Git for triples, LDOW2013, May 14, 2013, Rio de Janeiro, Brazil, http://ceur-ws.org/Vol-996/papers/ldow2013-paper-01.pdf
(67)Value Network - P2P Economy, Resource type, Last modified: September 26th, 2013, http://valuenetwork.referata.com/wiki/Resource_type
(68)Value Network - P2P Economy, Resources, Last modified: August 8th, 2013, http://valuenetwork.referata.com/wiki/Resource
(69)Teemu Tommila, Juhani Hirvonen, Antti Pakonen, Fuzzy Ontologies for retrieval of industrial knowledge - a case study, VTT Working Papers 153, 2010, http://www.vtt.fi/inf/pdf/workingpapers/2010/W153.pdf
(70)Asma Djellal, Mounir Hermam, Zizette Boufaida, An Extension of the Ontology Web Language with Viewpoint and Fuzzy Notions http://umc.edu.dz/vf/images/misc/session3A/24-3A-paper3-Asma%20djellal.pdf
(71)Thanh Lê BACH, Construction d'un Web sémantique multi-points de vue, doctoral thesis, École des Mines de Paris, Sophia Antipolis, pp. 42, 23 October 2006, http://pastel.archives-ouvertes.fr/docs/00/50/02/93/PDF/These_BACH-Thanh-Le.pdf
(72)Robert Isele, Christian Bizer, Active Learning of Expressive Linkage Rules using Genetic Programming, Journal of Web Semantics, March 10, 2013, http://dws.informatik.uni-mannheim.de/fileadmin/lehrstuehle/ki/pub/IseleBizer-ActiveLearningOfExpressiveLinkageRules-JWS2013.pdf
(73)Samir Vandic et al., A Semantic Clustering-Based Approach for Searching and Browsing Tag Spaces, SAC'11, March 20-25, 2011, http://people.few.eur.nl/fhogenboom/papers/sac11-stcs.pdf
(74)Lucia Specia and Enrico Motta, Integrating Folksonomies with the Semantic Web, Knowledge Media Institute - The Open University, http://people.kmi.open.ac.uk/motta/papers/SpeciaMotta_ESWC-2007_Final.pdf
(75)Project VRM - Main Page, http://cyber.law.harvard.edu/projectvrm/Main_Page
(76)Doc Searls, Are Free Customers Better Than Captive Ones? http://schedule.sxsw.com/2012/events/event_IAP13611
(77)Project Danube , http://projectdanube.org/
(78)A Standards-based, Open and Privacy-aware Social Web, http://www.w3.org/2005/Incubator/socialweb/XGR-socialweb-20101206/
(79)The Friend of a Friend (FOAF) project, http://www.foaf-project.org/
(80)Value Networks and the true nature of collaboration, Chapter 7: Deep Dive into the Methodology SNA / VNA / ONA, http://www.valuenetworksandcollaboration.com/deepdive/snavnaona.html
(81)FreedomBox Foundation, http://freedomboxfoundation.org/
(82)Eben Moglen, 'Freedom in the Cloud' - Software Freedom, Privacy and Security for Web 2.0 and Cloud Computing, Friday, February 5th, 2010, 7pm, NYU, https://www.youtube.com/watch?v=QOEMv0S8AcA
(83)Markus Sabadello, Video: The Three Visions, http://projectdanube.org/videos/video-the-three-visions/
(84)Personal Data Ecosystem Consortium - Empowering People with Their Personal Data, http://www.personaldataecosystem.org
(85)Project Danube - Publications, http://projectdanube.org/publications/
(86)Hal Abelson et al., ccREL: The Creative Commons Rights Expression Language, version 1.0, March 3rd, 2008, http://wiki.creativecommons.org/images/d/d6/Ccrel-1.0.pdf
(87)Bates, Adam et al., Towards Secure Provenance-Based Access Control in Cloud Environments, CODASPY ' 13, February 18-20, 2013, ACM, http://ix.cs.uoregon.edu/~amb/documents/Bates_Codaspy13.pdf
(88)Hassan Takabi, A Semantic Based Policy Management Framework for Cloud Computing Environments, Dissertation, University of Pittsburgh, 2013 http://d-scholarship.pitt.edu/19512/1/ETD.pdf
(89)Deliverable D37: Resilient Computing Curriculum, ReSIST: Resilience for Survivability in IST, A European Network of Excellence, December 2008, http://resist.isti.cnr.it/files/D37.pdf
(90)Deliverable D38: Resilient Computing Courseware, ReSIST: Resilience for Survivability in IST, A European Network of Excellence, May 2009, http://resist.isti.cnr.it/files/D38.pdf
(91)Short Message Service, 3rd Generation Partnership Project 2 "3GPP2", http://www.3gpp2.org/public_html/specs/CS0015-0.pdf
(92)Sporny, Manu ed., HTTP Keys 1.0, Draft Community Group Specification, August 6th, 2013, https://payswarm.com/specs/source/http-keys/
(93)Hickson, Ian ed., HTML5 Web Messaging, W3C Candidate Recommendation, May 1st, 2012, http://www.w3.org/TR/webmessaging/
(94)George Sadowsky, ed., Accelerating Development Using the Web: Empowering Poor and Marginalized Populations, World Wide Web Foundation, May 2012, http://public.webfoundation.org/publications/accelerating-development/
(95)E. Wilde, URI Scheme for Global System for Mobile Communications (GSM) Short Message Service (SMS), Internet Engineering Task Force, RFC 5724, January 2010, http://www.ietf.org/rfc/rfc5724.txt
(96)Project: Voice Browsing, World Wide Web Foundation, January 2011 - June 2013, http://www.webfoundation.org/projects/voice-browsing/illiteracy/voice-browsing/
(97)HTML 5 - A vocabulary and associated APIs for HTML and XHTML, 5 Web Browsers, W3C Working Draft, http://www.w3.org/TR/2009/WD-html5-20090423/browsers.html
(98)User Agent, Wikipedia, http://en.wikipedia.org/wiki/User_agent
(99)Robert Tolksdorf et al., Selforganization in Distributed Semantic Repositories. FIS 2009, LNCS 6152, pp. 1-14, 2010, Springer-Verlag. http://link.springer.com/chapter/10.1007%2F978-3-642-14956-6_1
(100)David Gelernter, Generative Communication in Linda, ACM Transactions on Programming Languages and Systems, Vol. 7, No. 1, January 1985, pages 80-112, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.113.9679
(101)Sebastian Koske, Swarm Approaches For Semantic Triple Clustering And Retrieval In Distributed RDF Spaces, Master's Thesis, Freie Universität Berlin, Fachbereich Mathematik und Informatik, February 2009,
http://www.mi.fu-berlin.de/inf/publications/techreports/tr2009/B-09-04/TR-B-09-04.pdf?1346662692
(102)Ronaldo Menezes and Robert Tolksdorf, A New Approach to Scalable Linda-Systems Based on Swarms (Extended Version), Technical Report CS-2003-04, Florida Institute of Technology, Department of Computer Sciences, 2003, https://repository.lib.fit.edu/bitstream/handle/11141/111/cs-2003-04.pdf?sequence=1
(103)Daniel Graff, Implementation and Evaluation of a SwarmLinda System, Technical Report TR-B-08-06, Freie Universitat Berlin, Department of Computer Science, Florida Institute of Technology, Department of Computer Science, http://www.inf.fu-berlin.de/inst/pubs/tr-b-08-06.abstract.html
(104)Franz Baader, Werner Nutt, Chapter 2, Basic Description Logics, Description Logic Handbook, pg. 50, http://www.inf.unibz.it/~franconi/dl/course/dlhb/dlhb-02.pdf
(105)Enrico Franconi, Description Logics - Tutorial Course Information, Free University of Bozen-Bolzano, Italy, http://www.inf.unibz.it/~franconi/dl/course/
(106)Kathrin Dentler et al., Semantic Web Reasoning by Swarm Intelligence, Department of Artificial Intelligence, Vrije Universiteit Amsterdam, The Netherlands, http://www.few.vu.nl/~kdr250/publications/Reasoning-by-Swarm-Intelligence.pdf
(107)Mark Elliott, Stigmergic Collaboration: The Evolution of Group Work, Vol. 9, Issue 2, May 2006, http://journal.media-culture.org.au/0605/03-elliott.php
(108)SPARQL 1.1 Overview, W3C Recommendation, W3C SPARQL Working Group, March 21, 2013, http://www.w3.org/TR/sparql11-overview/
(109)Huang Jiewen et al., Scalable SPARQL Querying of Large RDF Graphs, 37th International Conference on Very Large Data Bases, Proceedings of the VLDB Endowment, Vol. 4, No. 11, 2011, http://www.cs.yale.edu/homes/dna/papers/sw-graph-scale.pdf
(110)Wan Fokkink, Process Algebra: An Algebraic Theory of Concurrency, Vrije Universiteit Amsterdam, Department of Theoretical Computer Science, The Netherlands, http://www.cs.vu.nl/~wanf/pubs/cai09-tutorial.pdf
(111)CCS, Wikipedia, http://en.wikipedia.org/wiki/Calculus_of_communicating_systems
(112)P. Ciancarini , R. Gorrieri , G. Zavattaro, Towards a Calculus for Generative Communication, 1996, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.4418
(113)C.A.R. Hoare, Communicating Sequential Processes, http://www.usingcsp.com/cspbook.pdf
(114)CSP for MapReduce (what is the author???) ...also see the rationale behind CSP http://www.iist.unu.edu/www/docs/techreports/reports/report421.pdf
(115)R.J. van Glabbeek, Notes on the Methodology of CCS and CSP, Centre for Mathematics and Computer Science, Amsterdam, http://www.cse.unsw.edu.au/~rvg/pub/methodology.pdf
(116)Raphael A. Finkel, Advanced Programming Language Design, Chapter 7, Concurrent Programming, Addison-Wesley, pp. 220-221, http://www.nondot.org/sabre/Mirrored/AdvProgLangDesign/finkel07.pdf {seeAlso: http://www.nondot.org/sabre/Mirrored/AdvProgLangDesign/}
(117)Robert Tolksdorf, Elena Paslaru Bontas, and Lyndon J.B. Nixon, Towards a Tuplespace-based middleware for the semantic web , Proc. of 2005 Int'l Conf. on Web Intelligence, Compiege, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.2876
(118)Enrico Franconi et al., An Intelligent Query Interface Based on Ontology Navigation, Workshop on Visual Interfaces to the Social and Semantic Web (VISSW2010) , Hong Kong, China. http://ceur-ws.org/Vol-565/paper3.pdf
(119)Wikipedia, Knowledge Base, https://en.wikipedia.org/wiki/Knowledge_base
(120)Martin O'Connor, Amar Das, SQWRL: a Query Language for OWL, Proceedings of OWL: Experiences and Directions (OWLED 2009), 2009, http://ceur-ws.org/Vol-529/owled2009_submission_42.pdf
(121)Follow Your Nose - W3C Wiki, Modified: August 25th, 2007, http://www.w3.org/wiki/FollowYourNose
(122)Tim Berners-Lee, Linked Data - Design Issues, Last Modified: June 18th, 2009, http://www.w3.org/DesignIssues/LinkedData.html
(123)BabelNet 2.0 - A very large multi-lingual encyclopedic dictionary and semantic network, http://babelnet.org/index.jsp
(124)Roberto Navigli and Simone Paolo Ponzetto, BabelNet: Building a Very Large Multilingual Semantic Network, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, pages 216-225, http://classes.soe.ucsc.edu/cmps140/Winter11/papers/P10-1023.pdf
(125)Roberto Navigli and Simone Paolo Ponzetto, BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network, Artificial Intelligence, 193 (2012), pg. 217 - 250
http://wwwusers.di.uniroma1.it/~navigli/pubs/AIJ_2012_Navigli_Ponzetto.pdf
(126)Roberto Navigli, BabelNet goes to the (Multilingual) Semantic Web, Proceedings of the 3rd Workshop on the Multilingual Semantic Web, November 11, 2012, http://ceur-ws.org/Vol-936/paper1.pdf
(127)Wikipedia, Natural Language Generation, http://en.wikipedia.org/wiki/Natural_language_generation
(128)Francois Mairesse, Natural Language Generation - ART on Dialogue Models and Dialogue Systems, http://people.csail.mit.edu/francois/research/papers/ART-NLG.pdf
(129)Hongyan Jing, Usage of WordNet in Natural Language Generation, Department of Computer Science, Columbia University, http://acl.ldc.upenn.edu/W/W98/W98-0718.pdf
(130)Aba-Sah Dadzie and Matthew Rowe, Approaches to Visualizing Linked Data: A Survey, Semantic Web 1 (2011) 1-2, IOS Press, http://www.semantic-web-journal.net/sites/default/files/swj118_1.pdf
(131)Christian Bizer et al., A Browser-Independent Presentation Vocabulary for RDF, http://hal.inria.fr/docs/00/05/61/32/PDF/fresnel.pdf
(132)CytoScape: An Open Source Platform for Complex Network Analysis and Visualization, http://cytoscape.org/
(133)Gephi, an open source graph visualization and manipulation software, http://gephi.org/
(134)Visual Understanding Environment, Tufts University, http://vue.tufts.edu/
(135)Hugh Glaser and Ian C. Millard, RKBExplorer: Application and Infrastructure, School of Electronics and Computer Science, University of Southampton, UK, http://ceur-ws.org/Vol-295/paper13.pdf
(136)Marbles Linked Data Engine, http://mes.github.io/marbles/
(137)Tabulator: Generic Data Browser, http://www.w3.org/2005/ajar/tab
(138)wiki.dbpedia.org:DBpedia Mobile, http://wiki.dbpedia.org/DBpediaMobile?v=hak
(138.5)RDF Gravity, http://semweb.salzburgresearch.at/apps/rdf-gravity/
(139)PowerAqua, Knowledge Media Institute, The Open University, http://technologies.kmi.open.ac.uk/poweraqua
(140)Lopez, Vanessa; Fernandez, Miriam; Motta, Enrico and Stieler, Nico (2011), PowerAqua: supporting users in querying and exploring the semantic web, Semantic Web, 3(3), pp. 249-265, http://dx.doi.org/doi:10.3233/SW-2011-0030
(141)Watson - Overview, Knowledge Media Institute, The Open University, http://watson.kmi.open.ac.uk/Overview.html
(141.5) YaCy - Decentralized Web Search, http://yacy.net
(141.75)Ilya Rudomilov, Ivan Jelinek, Semantic P2P Search Engine, Proceedings of the Federated Conference on Computer Science and Information Systems, pp. 991-995, http://fedcsis.eucip.pl/proceedings/pliks/237.pdf
(142)Robert Neumayer, Semantic and Distributed Entity Search in the Web of Data, Doctoral Thesis, Norwegian University of Science and Technology, October 2012, http://www.idi.ntnu.no/~neumayer/pubs/NEU13_thesis.pdf
(143)Miriam Fernández Sánchez, Semantically Enhanced Information Retrieval: An Ontology-Based Approach, Dissertation, Escuela Politécnica Superior, Departamento de Ingeniería Informática, January 2009, http://nets.ii.uam.es/miriam/thesis.pdf
(144)Michele Boldrin and David K. Levine, The Case Against Patents, Federal Reserve Bank of St. Louis, Research Division, September 2012, http://research.stlouisfed.org/wp/2012/2012-035.pdf
(145)Cambia, http://www.cambia.org/daisy/cambia/home.html
(146)The Lens, http://www.lens.org/lens/
(147)Richard Jefferson, Science as Social Enterprise - The CAMBIA BiOS Initiative, Innovations: Technology, Governance, Globalization, Fall 2006, Vol. 1, No. 4: 13-44, http://www.cambia.org/daisy/cambia/3123/version/default/part/AttachmentData/data/%282007%29%20Science%20As%20Social%20Enterprise%20-%20Inovations.pdf
(148)Richard A. Epstein, Richard Posner Gets It Wrong, Defining Ideas: A Hoover Institution Journal, July 31, 2012, http://www.hoover.org/publications/defining-ideas/article/123926
(149)Richard A. Posner, Why There Are Too Many Patents in America, The Atlantic, July 12, 2012, http://www.theatlantic.com/business/archive/2012/07/why-there-are-too-many-patents-in-america/259725/
(150)Charles Duhigg, How Rick Warren Harnessed the Power of Social Habits, Christianity Today, August 10, 2012, http://www.christianitytoday.com/ct/2012/august-web-only/how-rick-warren-harnessed-power-of-social-habits.html
(151)Hacker Ethic, Wikipedia, http://en.wikipedia.org/wiki/Hacker_ethic
(152)Scott Thumma, Exploring the MegaChurch Phenomena: Their characteristics and social context, Hartford Institute for Religious Research, http://hirr.hartsem.edu/bookshelf/thumma_article2.html
(153)Rivalry (economics), Wikipedia, http://en.wikipedia.org/wiki/Rivalry_%28economics%29
(154)Excludability, Wikipedia, http://en.wikipedia.org/wiki/Non-excludable_good
(155)Elinor Ostrom, Collective Action and the Commons: What Have We Learned?, September 2009, http://www.cornell.edu/video/elinor-ostrom-collective-action-and-the-commons
(156)Presentation by: Todd Garrett and Doug Engelbart to Management System Global Leadership Team, Procter and Gamble, February 26th, 1998, https://archive.org/details/XD1757_98MangmtSystemsGlobleadership
(157)Rhodes, Richard, "The Making of the Atomic Bomb", Simon and Schuster, New York, NY, 1986, pg. 676
(158)Kevin A. Carson, The Homebrew Industrial Revolution: A Low-Overhead Manifesto, Center for a Stateless Society, Chapter 5, http://www.mutualist.org/sitebuildercontent/sitebuilderfiles/ChapterFive.pdf (Chapter 5)
https://www.academia.edu/1605402/The_Homebrew_Industrial_Revolution (full)
(159)How to Make (almost) Anything, MAS.863/4.140, Massachusetts Institute of Technology, http://fab.cba.mit.edu/classes/863.13/
(160)Zach Hoeken Smith, Introducing BotQueue: Open Distributed Manufacturing, www.hoektronics.com/2012/09/13/introducing-botqueue-open-distributed-manufacturing/
(161)Euromax Terminal Rotterdam | ECT, http://www.ect.nl/en/content/euromax-terminal-rotterdam
(162)Case Studies - Euromax Terminal Rotterdam, Certus Port Automation, http://www.certusportautomation.com/cases/caseitem/2-euromax-terminal-rotterdam.html
(163)Robert A. Freitas, Jr., William P. Gilbreath Ed., NASA's Advanced Automation for Space Missions, NASA Conference Publication 2255, National Aeronautics and Space Administration, Scientific and Technical Information Branch, 1982, http://www.islandone.org/MMSG/aasm/, Updated: June 25, 1999
(164)The Center for Bits and Atoms, http://cba.mit.edu
(165)Fab Lab FAQ, http://fab.cba.mit.edu/about/faq/
(166)Peter R. Orszag, Open Government Directive, Memorandum for the heads of executive departments and agencies, M-10-06, The White House, December 8, 2009, http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m10-06.pd
(167)Sylvia M. Burwell et al., Open Data Policy - Managing Information as an Asset, Memorandum for the heads of executive departments and agencies, M-13-13, The White House, May 9, 2013, http://www.whitehouse.gov/sites/default/files/omb/memoranda/2013/m-13-13.pdf
(168)Project Open Data, Open Data Policy - Managing Information as an Asset, http://project-open-data.github.io/
(169) Free Art License 1.3, http://artlibre.org/lal/en
(170) CERN Open Hardware Licence, http://www.ohwr.org/projects/cernohl/wiki
(171) The Solderpad Hardware License, http://solderpad.org/licenses/
(172) The TAPR Open Hardware License, http://www.tapr.org/ohl.html
(173) Javier Serrano, [Discuss] hardware != patentability, Friday, May 23rd, 2014, 3:34 AM, http://lists.oshwa.org/pipermail/discuss/2014-May/001093.html
(174) Big Data analytics in oil and gas, March 26, 2014, http://www.bain.com/publications/articles/big-data-analytics-in-oil-and-gas.aspx
(175) Tim Berners-Lee, Information Management: A Proposal, CERN, March 1989, May 1990, http://www.w3.org/History/1989/proposal.html
(176) Tim Berners-Lee, Weaving the Web : the original design and ultimate destiny of the World Wide Web by its inventor, 1999, San Francisco : HarperSanFrancisco
(177)Tarus Balog, Open For Business: The Importance of trademarks, even for an open source business, July 12, 2011,