By Patrick Carmichael, University of Reading, UK


This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with the preservation of the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively good representation online of genocide denial groups and changes in the technological strategies of holocaust denial and other far-right groups. It draws on the author’s work in providing IT support for a UK-based Non-Governmental Organization providing support for survivors of genocide in Rwanda.


Recent technological advances offer opportunities for victims of genocide, ethnic cleansing, or other enacted hate to bear witness under the gaze of a global audience on the Internet. Some allow those who are suffering under genocidal regimes to make their voices heard; others, the collation of fragmentary evidence from remnant populations and diasporic communities into secure, authoritative and enduring bodies of testimony. My own interest in this area stems from my experience of collating the testimonies of survivors of genocide in Rwanda on behalf of Survivors’ Fund, a UK-based Charity working closely with the Association des Veuves du Genocide d’Avril (AVEGA) in Rwanda. In 1994, an estimated 800,000 people (about 11% of the population) were killed in an organized genocide. Far from being the result of spontaneous intertribal strife, this was organized by senior government figures and coordinated by civil servants and local officials. The media, particularly the notorious Radio Television Libre de Mille Collines, were instrumental in directing the genocide. While the majority of the victims of the genocide were ethnic Tutsi, victims also included ethnic Hutu opposed to the government, those suspected of collusion with the Rwandan Patriotic Front (who occupied the north-western parts of the country but had signed a peace accord with the government) and ‘intellectuals.’ The National University of Rwanda at Butare was subject to coordinated attacks with the majority of staff and students on campus being murdered regardless of ethnic origin (Prunier, 1995). A policy of organized rape enacted during the genocide has left many women HIV-positive (Human Rights Watch, 1996) and their poor prognosis has spurred many to provide testimonies to AVEGA and Survivors’ Fund detailing their experiences in 1994 and subsequently.

What was initially a purely technical assessment developed rapidly into a much broader study of the political implications of collating, preserving and publishing such data. One outcome of this was a review of the complex relationship between media development and security in post-Genocide Rwanda (Carmichael, 2002); another is this current review of network technologies and the implications of their deployment for information providers, which discusses a range of strategies that might be employed in the development of secure and enduring networked resources by individuals or groups concerned to preserve community resources including (but not limited to) testimonies of the survivors of genocide. The necessity of using what may appear to be complex technical solutions, involving cryptography, ‘secret sharing’ of documents and other security measures, is a reflection of the fact that any individual or organization deciding to become a provider of information on the global network exposes themselves to scrutiny and even attack. When the information provider role relates to genocide and is developed against a background of continuing genocidal activity, or where apologists or deniers of genocide are also politically active and technologically capable, the providers may find themselves embroiled in (or even provoking) what has been termed ‘netwar.’ As defined by Arquilla and Ronfeldt (1997a), this typically involves individuals and non-governmental or paramilitary organizations using freely available technologies and is distinguished from ‘cyberwar,’ which is associated with ‘high-intensity’ or ‘medium-range’ conflict involving formal military forces pitted against each other and using high-technology weapons and techniques. Harknett (1996) characterizes ‘netwar’ as focusing on ‘societal connectivity’ (rather than military capability) which can be attacked, disrupted, or destroyed on three different levels: the personal, the institutional, and the national.

While deniers of, and apologists for, the genocide in Rwanda are as yet poorly represented online, lack of public awareness outside the region about the causes and conduct of genocide in Rwanda means that their influence may be disproportionately great (Carmichael, 2002). When we look online, we find that small and emerging nation-states, those whose existence or borders are contested, and networked organizations without territorial holdings are able to ‘boost’ their profile and exert disproportionate levels of influence (Brunn & Cottle, 1997; Whine, 1999). This raised a set of concerns about the potential power and influence of groups who might at some point in the future seek to deny the existence of the Rwandan Genocide, and about the potential threats to the security and integrity of any electronic archive we might construct. This led to an examination of the evolving conflict being enacted across the Internet between groups concerned to commemorate the European holocaust on the one hand and holocaust denial and other far-right groups on the other. What emerged from this analysis qualitatively changed the terms in which my colleagues and I thought about the process of ‘taking community memory online.’

The Internet, Mass Communication and the Holocaust

The Internet differs from more traditional mass communication systems such as radio, television and print media in a number of respects, some of which have contributed to its enthusiastic and effective use by extremist groups, including holocaust deniers. Firstly, it is comparatively inexpensive to use and the development of Web-based services means that ownership of a computer or membership in a networked organization is no longer a prerequisite for the establishment and maintenance of an online presence such as a Web site.

Secondly, the ease with which content may be produced and disseminated makes it possible for individuals and organizations to ‘punch above their weight,’ producing and publishing Web sites with apparently greater authority and with a potentially far larger audience than would otherwise be the case (Whine, 1999). Networked resources can easily be mirrored to other locations or hyperlinked from other Web sites, allowing users to encounter what may appear to be authoritative materials, but without any political, historical or editorial context and without any kind of information as to the status of documents or their authors.

Thirdly, the imposition of legal norms across the Internet has, to date, proved very difficult, and there is little international agreement about the responsibilities of ‘carriers’ including ISPs (Internet Service Providers). Even under the US Communications Decency Act of 1996 (subsequently overturned in 1997 by the Supreme Court on the grounds of its being ‘over broad’), ISPs were not held responsible, in a broad range of circumstances, for the use of their facilities to publish content considered defamatory. Differences in conceptions of ‘free speech,’ about what constitutes ‘political propaganda,’ and about the rights and responsibilities of telecommunications companies and Internet Service Providers make application of international legislation such as the 1948 UN Convention on Genocide difficult, even when supported by national laws.

A final feature is the strong libertarian tendency of the architects of the Internet and developers of networking software, who have resisted efforts to regulate the network, undermined intellectual property law and challenged actions that they have perceived as ‘censorship.’ In the often-cited words of John Gilmore of the Electronic Frontier Foundation, ‘the net interprets censorship as damage and routes around it.’1 This commitment to ‘free speech’ has allowed Holocaust deniers to publish material on the Internet with impunity and to argue that by doing so they are upholding democratic principles.

A case in point is Ernst Zundel and Ingrid Rimland’s ‘Zundelsite’2, a central resource for holocaust ‘revisionists’ which offers users online versions of a wide range of documents including Harwood’s Did Six Million Really Die?3 (1975). Each page of the site includes a footnote reading:

The concepts expressed in this document are protected by the basic human right to freedom of speech, as guaranteed by the First Amendment of the Constitution of the United States, reaffirmed by the U.S. Supreme Court as applying to the Internet content on June 26, 1997

This statement, a reference to the Supreme Court’s 1997 ruling that attempts to regulate Internet content under the Communications Decency Act were unconstitutional,4 is accompanied by an image of the ‘blue ribbon’ emblem of the Electronic Frontier Foundation (EFF) ‘Free Speech Online’ campaign. The site’s homepage includes the statement ‘Remember: whenever you see the blue ribbon … the Zundelsite was at the center’.5 The existence of Zundel mirror sites (complete replicas of the original) around the Internet is a further indication of its operators’ skill in exploiting features of the global network. Not only have they aligned themselves with campaigns such as those organized by the EFF, they are also aware of how network technologies can be used to protect Web site content against future legislation or ‘netwar’ activity through multiple replication of data, and offer users a ‘Zundelsite kit’ which allows easy construction of a mirror site.

The Internet and “Leaderless Resistance”

A number of agencies provide both critical analysis of the online publications of Zundel and other Holocaust deniers, and theoretical frameworks for the study of online hate in general6. However, the trend among far-right groups identified by David Goldman (Hatewatch, 2000) away from what he describes as extremist ‘outreach’ (attempting to reach a mass audience with ‘brochure’ Web sites) towards ‘netwar’ activities may mean that providing counter-information and educational materials will prove an inadequate response to their activity. Extremist groups have used the Internet to coordinate and perpetrate sustained and sophisticated campaigns of harassment and disinformation against organizations and individuals, as in the case of former Pennsylvania housing advocate Bonnie Jouhari7. Goldman’s contention is supported by a review of extremist sites which publish information about resources capable of being used to effect a wide range of network attacks, both those designed to deny legitimate users access (DOS, or ‘Denial of Service,’ attacks) and those designed to destroy resources.8 Once again, high levels of technological capability and planning are evident, with security resources designed to prevent attacks being featured alongside ‘cracking’ tools useful in their perpetration.

The existence of organizational and individual Web sites which encourage visitors to initiate ‘netwar’ activities against real and perceived opponents, while also providing them with the knowledge and resources necessary to do so, can be seen as a reflection of the doctrine of ‘Leaderless Resistance’ developed by ‘Aryan Nations’ leader Louis Beam, which states that

… participants in a program of Leaderless Resistance through ‘phantom cell’ or individual action must know exactly what they are doing and how to do it. It becomes the responsibility of the individual to acquire the necessary skills and information as to what is to be done … Organs of information distribution such as newspapers, leaflets, computers etc which are widely available to all, keep each person informed of events, allowing for a planned response that will take many variations. (Beam, 1992)

Some of the individuals active on the extreme right are also holocaust deniers9 and there is clearly sharing of resources and expertise between groups specifically concerned with ‘revisionism’ and those extremists with broader political agendas10.

In addition, high profile legal cases against holocaust deniers such as Ernst Zundel have been interpreted by a range of political groups as examples of governmental interference and restriction of individual freedoms. A result of this is the alignment of groups without obvious interests in holocaust denial (such as anti-federalist ‘Patriot’ organizations and the anti-environmentalist ‘Wise Use’ movement) with more overt deniers, with the Internet providing a context within which expertise and resources can be exchanged and new organizational forms can develop. The ‘Leaderless Resistance’ model of organization allows for, and even encourages, a symbiosis between information providers like Beam and Zundel, and small groups or individuals who take their ‘direction’ from their sites, gather resources from others and who may perpetrate network or real attacks on the basis of public-domain information about political opponents.

Network Attacks and Defensive Strategies

Network attacks that might be launched against Web sites or other Web resources by deniers or apologists for genocide, operating either as part of concerted campaigns or within a ‘leaderless resistance’ framework, might, then, take many forms. These include malicious ‘cracking’ of Web server security in order to alter or destroy content, email ‘flooding’ and the deployment of hostile viruses, but more significantly Denial of Service (DOS) attacks. These do not involve breaching the target network or Web site but rather overloading them with so much traffic that they become unable to cope, and legitimate users then find themselves unable to gain access to services and resources. Earlier ‘brute-force’ DOS attacks have been supplanted by Distributed DOS (DDOS) attacks which generate traffic from multiple points on the Internet, the identities of which are concealed or ‘spoofed’ (Northcutt & Novak, 2001, pp. 251-254). This causes the target to be unable to complete transactions and makes identification of the sources of the attacks even more difficult:

To a victim, an attack may appear to come from many different source addresses, whether or not IP source address spoofing is employed by the attacker. Responding to a distributed attack requires a high degree of communication between Internet sites. Prevention is not straight forward because of the interdependency of site security on the Internet; the tools are typically installed on compromised systems that are outside of the administrative control of eventual denial of service attack targets. (CERT, 1999, Solutions section, para. 1)

While DDOS attacks are effective because they exploit the structure of the Internet, another feature of the Internet, its rapid growth, has increased the number of insecure network locations from which such attacks may be launched:

Currently, there are tens of thousands – perhaps even millions – of systems with weak security connected to the Internet. Attackers are (and will) compromising these machines and building attack networks. Attack technology takes advantage of the power of the Internet to exploit its own weaknesses and overcome defenses. (SANS, 2000; Key Trends and Factors section, para. 3)

These factors, combined with the ready availability of the software components necessary to launch a DDOS attack, make it comparatively easy for individuals or groups to institute what Arquilla and Ronfeldt (1997b) have termed ‘swarm’ attacks against even the largest institutional networks or Web sites. The advent of Internet-enabled portable devices such as Personal Digital Assistants and WAP-enabled phones that are not exclusively identifiable with any single individual also has enormous implications for the security of networks and networked resources, as they offer the potential for mobile and effectively anonymous attacks. DDOS attacks organized in this way are very much in keeping with the idea of ‘leaderless resistance’ in that perpetrators are difficult to identify and are unknown both to each other and to the providers of the resources used to mount the attacks.

With extremist groups and holocaust deniers well represented online and becoming capable of waging ‘netwar’, those organizations devoted to challenging holocaust denial, raising awareness of genocide or to preserving testimonies and other resources as part of ‘community memory’ projects may find their resources under network attack, so it is imperative that security measures are put in place. The dilemma for any organization wishing to maintain a constant and secure Internet presence which may be subject to unexpected and diverse network attacks is whether to concentrate on developing a well-defended central resource or ‘bastion host’ over which they have complete administrative control and which is optimized for detection and response to possible incursions or DOS attacks; to build a distributed network of sites offering less security on an individual basis but a more attack-proof network presence overall; or to adopt a more radical approach involving dispersed data storage.

The remainder of this essay will review these different strategies, but ahead of this review it is worth noting that two paradoxes will become apparent. The first is that many of the technologies discussed can be used as easily by holocaust deniers to protect their online resources as by those involved in the development of community memory projects; the second is that many of the technologies identified are at least as likely to be used to protect privacy and anonymity (of individuals, organizations or documents) as they are to be used to place data in the public domain. It is for this reason that I am cautious about identifying specific software products or services as ‘solutions’; rather, there is a need to borrow and develop a range of tools and techniques in order to develop an ‘information strategy’ consistent with existing ‘real world’ policy and practice.

Bastions in the Real and Virtual World: Forts and Firewalls

Despite the fact that the ubiquity of the global network makes the physical location of resources less critical, the issue of where servers should be located remains important, particularly in regions prone to war, civil unrest or natural disasters. In Rwanda, as in many Less Economically Developed Countries, much secure data tends to be held outside the country11 as network infrastructures are still fragile and the physical security of the small number of available Web servers may be difficult to guarantee. One approach, then, is to locate resources in secure locations: already, secure servers used for the processing and storage of financial data tend to be located either in custom-built facilities or in redundant military installations.

A development of the secure storage facility is the concept of the ‘Data Haven,’ popularized by Neal Stephenson’s novel Cryptonomicon, in which a group of network engineers attempt to establish a safe haven for data in a politically stable location in the Pacific Rim. While some of their number have motives related to using the haven for genocide ‘education and avoidance,’12 they find that the Data Haven is also of interest to government and international criminal organizations that wish to take advantage of the security and privacy it provides. Currently, the implementation of a Data Haven closest to Stephenson’s vision is the facility operated by HavenCo, who offer ‘a secure managed collocation business with the added advantage that the customers’ data will also be physically secure against any legal action.’13 This physical security is provided by the location of the facility on ‘Rough’s Tower,’ a Second World War anti-aircraft fort six miles off the East Coast of the United Kingdom and occupied since 1967 by the Bates family, who have declared it to be the independent principality of ‘Sealand’.14 Garfinkel (2000) reports that, in addition to providing secure and private facilities for paying individuals and organizations, HavenCo plan to offer Web services and data storage facilities to the government-in-exile of Tibet, although at the time of this writing it is unclear whether this has been established. There also exist a number of Internet Service Providers (ISPs) which describe themselves as ‘data havens’ although in most cases they are differentiated from their competitors only by holding smaller quantities of personal data on their customers, by the level of electronic security they provide and by operating ‘no questions asked’ policies on Web site content and client activities.15

Most information providers are not in a position to set up fortified resources of this kind, and communities under threat of ‘ethnic cleansing’ or genocide are even less likely to be able to establish physically secure locations. What is more likely is that they establish purely electronic security measures to protect resources, or select a remote ‘host’ for their data which has such measures in place. ‘Firewalls’ are specialized software applications commonly located on a computer between the data to be protected and the Internet, shielding it from outside interference. Wack and Carnahan (1995) distinguish between the purely technological ‘firewall system’ and the broader notion of a ‘firewall policy,’ by which they mean an approach to security that implements a wider organizational policy, while Zwicky, Cooper and Chapman (2000) explore more fully the idea of a firewall as something constructed in response to specific needs, as an expression of a security policy.
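The idea of a firewall as an expression of policy rather than a mere technology can be illustrated in a few lines of code. The following sketch (in Python, with hypothetical networks and ports) expresses a policy as an ordered list of rules with a default-deny fallback; a production firewall operates on real packet headers, but the underlying logic is the same.

```python
# Minimal sketch of a rule-based packet filter expressing a security
# policy: ordered rules are checked in turn, with default-deny at the
# end. Source labels and ports are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str       # origin of the traffic, e.g. "internet" or "internal"
    dst_port: int  # destination service port

# Each rule pairs a predicate with a verdict; the first match wins.
RULES = [
    (lambda p: p.src == "internal", "allow"),  # trust internal hosts
    (lambda p: p.dst_port == 80, "allow"),     # public Web service
    (lambda p: p.dst_port == 443, "allow"),    # public Web service (TLS)
]

def filter_packet(packet: Packet) -> str:
    for predicate, verdict in RULES:
        if predicate(packet):
            return verdict
    return "deny"  # anything not explicitly allowed is blocked

print(filter_packet(Packet("internet", 80)))  # a Web request passes
print(filter_packet(Packet("internet", 22)))  # remote login is blocked
```

The policy, not the mechanism, is what matters here: changing the organization's stance means editing the rule list, not the filtering engine.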

Even within Web resources, it is possible to ‘screen’ important data, preventing direct access from the Internet. In this respect the NGO sector in particular can learn from commercial Internet information providers such as national and international news services. These characteristically use an ‘n-tiered’ data architecture, with requests for data from users being mediated through a ‘middle tier’ at which stage authentication and authorization take place. The physical location of data is concealed from the end-user, and it may even reside on a different server or servers from those where the ‘middle tier’ is located. What the user sees is a ‘snapshot’ of the data, generally presented as a dynamically generated or ‘on-the-fly’ Web page on the user tier. This approach also has the benefit of being ‘scalable’: resources can be added to the secure and screened data tier without the need to constantly redevelop the user tier. The separation of data from formatting and presentation has the added benefit of ‘future-proofing’ data: as long as it conforms to some standard data description system, it will remain accessible to whatever combinations of user-tier hardware and software develop in the future. This is the rationale behind the adoption of XML (Extensible Markup Language)16, which allows data to be described using a system of machine-readable (and, in the last resort, human-readable) ‘tags’ and delivered to whatever user-tier device or software is currently in use.
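This separation of data description from presentation can be sketched briefly. The Python fragment below (the element names and the record itself are invented for illustration) describes a testimony record with XML tags and then renders a plain-text ‘snapshot’ of it; the same record could equally be rendered by any future user-tier software without the data itself changing.

```python
# Sketch: data described with XML tags, independent of any single
# presentation. The element names and content are hypothetical.
import xml.etree.ElementTree as ET

record = """
<testimony id="t-001">
  <witness>Anonymous</witness>
  <date>1994-04-12</date>
  <location>Butare</location>
  <text>Extract of testimony text ...</text>
</testimony>
"""

doc = ET.fromstring(record)

# A 'user tier' produces an on-the-fly snapshot; the same tagged data
# could instead be rendered as HTML, audio markup, or a print layout.
def render_plaintext(doc):
    return "{}, {}: {}".format(
        doc.findtext("location"), doc.findtext("date"), doc.findtext("text"))

print(render_plaintext(doc))
```

Because the tags are both machine-readable and, in the last resort, human-readable, the archived record survives even if a particular rendering tier becomes obsolete.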

Mirrors, Peer-to-Peer Networks and ‘Tactical Media’

The construction of ‘bastion hosts’ may not simply be beyond the technological capabilities of many users. It may also be strategically inappropriate, particularly for communities under threat of genocide or seeking to document its enaction. To build a single networked resource in a specific physical location merely replicates traditional, hierarchical forms of media, which are prone to capture, destruction or decapitation (Harknett, 1996). The whole point of the original plans that led to the development of the Internet was to create a communications system without a center: the simplest way in which users or communities of users can take advantage of this is to use mirroring.

Mirroring involves the replication of single documents or whole Web sites across the global network, allowing users to choose a mirror on the basis of global and local patterns of access and speed of network traffic. In the event of a single network component being unavailable or overly busy, another may be selected either automatically or at the instigation of the user. One of the most celebrated uses of mirroring involved the Belgrade-based B92 radio station, which was able to continue broadcasting over the Internet even though the government in Belgrade, of which they were critical, jammed their radio signals (Collin, 2001). As long as they could get recordings of broadcasts, stored as ‘RealAudio’ sound files, to their main Internet Service Provider in the Netherlands or any one of their ten mirror Web sites by FTP (file transfer protocol) or e-mail, they could be duplicated and made available to audiences both in Yugoslavia and globally. Ultimately, a dissemination strategy conceived in response to lack of access to traditional forms of media exposed B92 to a far larger audience than would otherwise have been the case and raised their international profile significantly.

In the context of the construction and preservation of community memory resources, mirroring provides a basic level of protection against physical and network attacks; since each of the host systems on which the content is mirrored may be configured differently, a single network attack is unlikely to succeed against all of them. At the same time, the originators of the content are dependent on the continuing goodwill of the mirror owners and on the integrity of their systems. Additionally, since the content of mirror sites is characteristically open to public scrutiny (the purpose of most mirrors, after all, is to provide easier, faster or cheaper public access to resources17), some potential mirrors may be unwilling to play host to material which they know may provoke network attacks capable of causing damage to their systems as a whole or ‘denial of service’ to their users.
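The failover behaviour that mirroring provides can be sketched as follows; the ‘mirrors’ here are stand-in functions rather than real hosts, but the pattern (try each replica in turn until one responds) is the essence of the approach taken by B92's audience when individual sites were unreachable.

```python
# Sketch of mirror failover: try each replica in turn and return the
# first successful response. The 'mirrors' are stand-in functions; in
# practice each would be an HTTP or FTP fetch from a different host.

def unavailable():
    raise ConnectionError("host unreachable")   # jammed, attacked or down

def mirror_nl():
    return "document content (Netherlands mirror)"

def mirror_us():
    return "document content (US mirror)"

def fetch_from_mirrors(mirrors):
    for fetch in mirrors:
        try:
            return fetch()
        except ConnectionError:
            continue  # this mirror failed; fall through to the next one
    raise RuntimeError("all mirrors unavailable")

# Even with the first host unreachable, the content is still served.
print(fetch_from_mirrors([unavailable, mirror_nl, mirror_us]))
```

An attack must now succeed against every differently configured host simultaneously, which is far harder than disabling a single ‘bastion.’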

A more radical approach than mirroring involves the establishment of decentralized ‘peer-to-peer’ (P2P) networks, across which distinctions between ‘client’ and ‘server,’ and between ‘user,’ ‘author’ and ‘provider’ are blurred. Kan (2001) describes Gnutella, one of the leading peer-to-peer technologies, in the following terms:

Standard network applications comprise three distinct modules. There is the server, which is where you deposit all the intelligence … there is the client, which typically renders the result of some action on the server for viewing by the user … and the network, which is the conduit that connects the client and the server. Gnutella blends all that into one. The client is the server is the network. [It] is effectively a software-based infrastructure that comes and goes with its users.

Gnutella’s implementation involves the establishment of a decentralized network of ‘nodes’ on which resources may be stored. Any computer connected to the Internet and capable of running the appropriate software can act as a node, with no requirement for these nodes to be continuously available. When a user connects to a Gnutella network, their software detects other available nodes and queries them to discover what resources they contain. Such systems have, to date, been used primarily for sharing material such as MP3 audio files across the network, but they could prove useful in the establishment of temporary peer-to-peer networks, running over the telephone system in situations where communities are deprived of access to traditional media or centralized Internet services.
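The node-discovery and query behaviour described above can be modelled in a few lines. In the toy Python sketch below (node names, resources and the time-to-live limit are invented for illustration), each node holds some resources, knows a few peers, and forwards queries outward hop by hop; no central index exists, and the ‘network’ is simply whatever nodes happen to be reachable.

```python
# Toy model of Gnutella-style resource discovery: every node is client,
# server and network at once. A query floods outward from the querying
# node, up to a time-to-live (TTL) limit, and hits are returned.

class Node:
    def __init__(self, name, resources):
        self.name = name
        self.resources = set(resources)
        self.peers = []  # currently reachable neighbours; membership is ad hoc

    def query(self, wanted, ttl=3, seen=None):
        seen = seen if seen is not None else set()
        if self.name in seen or ttl == 0:
            return []                     # avoid loops and limit flooding
        seen.add(self.name)
        hits = [(self.name, wanted)] if wanted in self.resources else []
        for peer in self.peers:           # forward the query to neighbours
            hits += peer.query(wanted, ttl - 1, seen)
        return hits

a, b, c = Node("a", []), Node("b", []), Node("c", ["testimony-041"])
a.peers, b.peers = [b], [c]
print(a.query("testimony-041"))  # found two hops away, with no central index
```

Because the index ‘comes and goes with its users,’ there is no single server to seize or disable, which is precisely what makes the architecture attractive for tactical use.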

Markovic (1998) describes the use of networks of mirror sites or of peer-to-peer network nodes to establish basic communication capabilities as ‘tactical media,’ a term which captures the notion of temporary solutions driven by expediency and making use of available facilities, rather than forming part of a long-term strategy of media or resource development. While such strategies might prove useful for communities under threat of genocide who wish to alert third parties to their situation, or who wish to preserve information, the often transient nature of peer-to-peer networks reduces their long-term potential for the development of community memory resources.

Some peer-to-peer systems do provide more permanent storage of data. Freenet (described as an ‘adaptive network’) is made up of nodes which store not only local data but also a database listing the data held on at least some of the other hosts forming part of the network. When a host receives a request for a document that it does not hold, it ‘forwards’ the request to another host, and this process continues until the data is located and passed back to the user via the referring hosts. Since each of these hosts can ‘cache’ the data, adding it to its local store, data is capable of being replicated across the wider Freenet network (Clark, 1999). A related project, ‘Red Rover’ (Brown, 2001), involves ‘subscribers’ accessing information via intermediary ‘volunteer hosts’ and Web-based email services; this approach is specifically designed to frustrate surveillance of Internet use and to circumvent censorship measures.18
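Freenet's forward-and-cache retrieval can be illustrated with a simplified model. The sketch below reduces routing to a single onward reference per host, which real Freenet does not do, but it shows the key property: data replicates along the request path, so frequently requested documents spread across the network of their own accord.

```python
# Simplified model of Freenet-style retrieval: a request is forwarded
# from host to host until the data is found, and each host on the
# return path caches a copy. Host names and keys are illustrative.

class Host:
    def __init__(self, name, store=None):
        self.name = name
        self.store = dict(store or {})
        self.next_hop = None   # simplified routing: one onward reference

    def request(self, key):
        if key in self.store:
            return self.store[key]        # held locally: serve directly
        if self.next_hop is None:
            return None                   # nowhere left to forward to
        data = self.next_hop.request(key) # forward the request onward
        if data is not None:
            self.store[key] = data        # cache a replica on the way back
        return data

h1, h2, h3 = Host("h1"), Host("h2"), Host("h3", {"doc": "archived testimony"})
h1.next_hop, h2.next_hop = h2, h3

h1.request("doc")          # the request travels h1 -> h2 -> h3
print("doc" in h2.store)   # h2 (and h1) now hold replicas too
```

After a single retrieval, destroying the original host no longer destroys the document, since intermediaries now hold copies.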

‘Eternity Services’ and Distributed Data

A final group of network architectures known as ‘eternity services’ are often categorized together with peer-to-peer technologies, perhaps because of their shared concern with privacy and autonomy. They differ, however, in some key respects, and their developers emphasize strategic security and permanence of resources rather than the ‘tactical,’ ad-hoc patterns offered by true peer-to-peer systems. Wiley characterizes eternity services as being ‘publisher-centric’ and concerned with preservation of data, rather than being ‘reader-centric’ and driven by audience demands (Wiley, 2001). Anderson (1996) described a model ‘Eternity Service’ and a number of implementations have been developed subsequently. Four of these, ‘Free Haven,’ ‘Publius,’ ‘Intermemory,’ and ‘OceanStore,’ will be considered here.

Eternity services, as conceived by Anderson, purposively replicate data (unlike Freenet’s user-driven replication and caching) and then scatter it across large numbers of network locations; the security of such ‘persistent data’ rests on this multiple redundancy and wide dispersal, which make Denial of Service and other attacks difficult to organize and perpetrate. Further security may be provided by encryption of documents and by anonymizing key processes such as the initial uploading of documents prior to their dispersal.

Free Haven is primarily concerned with the provision of secure, anonymous and ‘low-profile’ archive facilities rather than with providing resources for frequent public access. Documents are split into shares and stored on a number of servers, each of which provides space in return for access to the other servers across the ‘servnet’ (Dingledine, Freedman & Molnar, 2001). The revocation of documents and their removal from the servnet occurs as and when specified by their owners, rather than on the basis of the frequency of their being requested by users. Publius is similar in that it presupposes the availability of a directory of contributing servers across which fragments of data are distributed using a ‘secret sharing algorithm.’ This breaks any document up into fragments, which are then replicated, encrypted and distributed across the network; a specified number of fragments are then required to reassemble and decrypt the original. Destruction or alteration of the document is, as a result, very difficult to achieve, making the system potentially very useful for the preservation of significant historical, constitutional or legal documentation. Publius has already been released into the public domain and the tools necessary to establish either a Publius server or to run a Publius client are freely available (Waldman, Rubin & Cranor, 2000).
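The ‘secret sharing algorithm’ behind systems of this kind is essentially Shamir's k-of-n threshold scheme: a document fragment is encoded as points on a random polynomial, any k of which reconstruct it, while fewer than k reveal nothing. The sketch below is an illustration only (it uses Python's non-cryptographic random module and a tiny sample secret, where a real deployment would use a cryptographically secure generator and operate on document fragments).

```python
# Sketch of k-of-n secret sharing in the spirit of Publius: split a
# secret into n shares such that any k reconstruct it. Illustration
# only; random.randrange is NOT cryptographically secure.
import random

P = 2**127 - 1  # a Mersenne prime; the secret must be smaller than P

def split(secret, n, k):
    # Each share is a point (x, f(x)) on a random degree k-1 polynomial
    # whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = int.from_bytes(b"testimony", "big")
shares = split(secret, n=5, k=3)     # scatter five shares across hosts
recovered = reconstruct(shares[:3])  # any three shares suffice
print(recovered.to_bytes(9, "big"))  # recovers b"testimony"
```

An attacker must therefore locate and destroy at least three of the five hosts to suppress the document, while compromising any two reveals nothing about its content.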

Concern with the maintenance of data security and author anonymity has led the developers of Free Haven and Publius to consider the range of potential attacks that might be launched against their server networks. The developers of Free Haven justify their approach:

In analysing systems like Free Haven, it is not enough to look at every day, plausible scenarios – every effort must be made to provide security against adversaries more powerful than the designers ever expect, because in real life, adversaries have a way of being more powerful than anyone ever expects. (Dingledine, Freedman & Molnar, 2001, p. 165)

What also emerges from descriptions of Publius and Free Haven, however, is the same paradox as has previously been discussed in relation to other patterns of Internet publishing. The opportunity to ‘route around’ or otherwise confound attempts at censorship by use of these systems may make them as attractive to far-right groups and holocaust deniers as to those seeking to use them as the basis of community memory projects. The developers of Free Haven list the range of ‘attacks’ they anticipate might be launched against them: these include ‘legal’ and ‘social’ threats as well as network attacks using purely technological means (Dingledine, Freedman & Molnar, 2001, pp. 177-181). Waldman, Cranor and Rubin (2001, p. 261), in a review of ‘trust’ issues in the context of Publius deployment, are more specific:

Attackers may use intellectual property law, obscenity laws, hate speech laws or other laws to try and force server operators to remove Publius documents from their servers or to shut down their servers completely … by placing Publius servers in many different jurisdictions, such attacks can be prevented to some extent.

The fact that ‘hate speech laws’ are perceived as a potential threat to the privacy of Publius authors is paradoxical (and not a little ironic), particularly given that this paper is concerned with defense of resources against attacks by groups and individuals more usually seen as the organizers and perpetrators of enacted hate.

The final applications to be considered here, Intermemory and OceanStore, differ from Free Haven and Publius in that their developers emphasize global access to data rather than anonymity and privacy. While security issues are addressed through multiple replication and encryption of data, the primary concern is the preservation of data integrity and availability rather than author or publisher anonymity. Intermemory [19] comprises a network of contributing servers across which data is automatically replicated and dispersed (Goldberg & Yianilos, 1998). Each ‘donation’ of server space increases the size of the Intermemory and entitles the donor to store their own data on the network; as in the applications described previously, users then submit requests for data to the Intermemory as a whole. OceanStore [20] has a similar infrastructure comprised of untrusted servers, with data protected by redundancy and cryptographic techniques (Kubiatowicz et al., 2000). The developers of OceanStore envisage providing ‘secure’ and ‘durable’ data distributed across ‘data pools’ and available both to ‘groupware’ applications and to large-scale collaborative scientific projects involving large datasets. The replication of data means that servers do not have to be continually online; an assumption underlying the OceanStore architecture is that, while users may access data pools only intermittently, data ‘persists’ and is available for an indefinite period.
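The core idea shared by Intermemory and OceanStore, replicating blocks across untrusted servers and verifying their integrity cryptographically on retrieval, can be sketched in a few lines. This toy ‘data pool’ is an illustrative invention rather than either system’s real protocol (the class, its placement rule and its parameters are my own): each block is addressed by its SHA-256 digest, so a reader can trust whichever server answers, and any surviving replica suffices.

```python
import hashlib

class ToyDataPool:
    """A toy 'persistent data' pool: each block is replicated across several
    simulated servers and addressed by its SHA-256 digest, so integrity can be
    verified regardless of which (untrusted) server supplies the data."""

    def __init__(self, num_servers: int = 8, replicas: int = 3):
        self.servers = [dict() for _ in range(num_servers)]
        self.replicas = replicas

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Disperse replicas across servers chosen deterministically from the digest.
        for i in range(self.replicas):
            idx = (int(digest, 16) + i) % len(self.servers)
            self.servers[idx][digest] = data
        return digest

    def get(self, digest: str) -> bytes:
        for server in self.servers:  # any intact replica will do
            data = server.get(digest)
            if data is not None and hashlib.sha256(data).hexdigest() == digest:
                return data  # content matches its address, so it is untampered
        raise KeyError("no intact replica found")
```

A testimony stored this way remains retrievable even after most of its replicas are destroyed, and a server that silently alters its copy is simply ignored because the digest no longer matches.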

While communities under threat of genocide may find ‘tactical media’ solutions appropriate and effective in the short term, the selection of a persistent data system represents a more long-term, strategic approach to data preservation than that offered by standard client-server or peer-to-peer technologies. Applications after the model of Intermemory or OceanStore appear to offer the greatest opportunity for community memory projects to secure their data, whether these comprise solely survivors’ testimonies or also include cultural, language and educational resources. The wide and persistent storage of data means that such resources would effectively be as secure as the global network itself, susceptible only to damage on a scale capable of destroying the network in its entirety.

Conclusion and Reflections

The collection of genocide survivor testimonies, the application that stimulated my own work in the area of data security, is only one application of the technologies described above. Genocide often targets not only people and communities but also aspects of the infrastructure (constitutional, religious, legal, educational, artistic and social) that reflect and sustain their existence. Sometimes this is largely a matter of destroying records of residence and ownership; at other times, documentation that is itself symbolic of community or national existence is the target.

Such documentation is as much in need of preservation as descriptions of the acts of genocide that accompanied its destruction and that of the people to whom it had meaning. Even mundane records, of tenancy and ownership, of educational enrollment and certification, and of births, marriages and deaths, become significant in the aftermath of ethnic cleansing and genocide; the secure storage of such data, while not necessarily acting as a deterrent, may allow more effective reparations and the reconstruction of civil society structures. In communities where diasporas exist, network access may not only allow ‘tactical media’ structures to be established in times of danger but also allow longer-term collective responsibility to be taken for the archiving of culturally significant data.

In Rwanda itself, there has been wide-ranging and politically-charged discussion as to what constitutes the most appropriate form of memorial to victims of genocide [21], echoing the continuing debates over the nature of both the German and Austrian Holocaust Memorials [22]. The Shoah Survivors’ Visual History Foundation [23], with its extensive collection of documents and over 50,000 video testimonies, represents a different kind of memorial and, while it uses the Internet and other media as means of dissemination and education, it is fundamentally a centralized resource on the model of an ‘online museum.’ A truly networked solution capable of preserving memories and records of individuals, communities and cultures in perpetuity, and comprising an integral part of the global communications network, might be used as the basis not only of educational resources or data repositories but of a truly global community memory. A true ‘eternity service’ might serve more than one purpose, acting as archive, memorial, educational resource and – perhaps – even as a reminder that it is impossible to expunge peoples and their cultures from our collective human consciousness.


1. Gilmore’s comment is cited in a number of publications from about 1996 onwards but appears to have been in circulation in private emails, interviews and newsgroup submissions from about the end of 1993 onwards. See Gilmore’s own account at
2. The site is also mirrored at a number of other Web addresses, so that entering ‘Zundelsite’ into an Internet search engine will guarantee to direct the user to either the main site or a recent mirror of it.
3. Harwood, R. [Richard Verrall] (1975). Did Six Million Really Die? Richmond, VA: Historical Review Press.
4. American Civil Liberties Union v. Reno, 929 F.Supp. 824 (E.D. Pa. 1996) (affirmed, Reno v. American Civil Liberties Union, 521 U.S. 844, 117 S.Ct. 2329, 1997).
5. This interpretation – of Zundel playing a pivotal role in the campaigns of the EFF – is not shared by the EFF themselves: there is little or no mention of Zundel or his Web site on the EFF Web site.
6. These include the Nizkor Project, which ‘offers an expose of common arguments – and techniques of argumentation – employed by Holocaust deniers,’ and the Anti-Defamation League. More recently the Freilich Foundation has begun to coordinate research in this area and hosted a conference ‘Cyberhate: Bigotry and Prejudice on the Internet’ at the Australian National University in November 2000.
7. Jouhari was targeted in 1998 by the Ku Klux Klan and by ‘Alpha HQ,’ a Philadelphia neo-Nazi group. This group’s Web site carried Jouhari’s picture, labelled her a ‘race traitor’ and threatened to lynch traitors ‘from the nearest tree or lamp post.’ ‘Spoofed’ postings purporting to be suicide notes from Jouhari in which she claimed that she wanted to “blow her brains out for being a race traitor” were subsequently made to over eighty Usenet newsgroups. While the Web site was ordered to be removed by a Pennsylvania state court, the task of removing the Usenet postings proved more difficult (Miller, 2000). In 2000 a Federal Court found against Ryan Wilson of Alpha HQ in a case brought by the US Department of Housing and Urban Development.
8. Goldman’s original (2000) review has now been replaced online by a selected list of ‘hate sites.’ The original described the features and facilities offered by sites including Hatecore and Hammerskins, both of which were offline by November 2002.
9. See Beam (n.d.) in which he alleges deliberate starvation of the German population after the end of WWII and states ‘The number of Jews killed during the war were inflated not by thousands, but by millions. The fiendish descriptions of German soap factories, lamp shades, and “death camps” during the war were for the purpose of covering up the real holocaust going on after the war ended.’
10. Perhaps the highest profile example being Houston-based lawyer Kirk Lyons, who has represented members of the Aryan Nations including Louis Beam and also Fred Leuchter, whose report alleging that no gas chambers existed at Auschwitz was funded by Ernst Zundel and is featured on the ‘Zundelsite’.
11. Imojo, for example, is a consortium of business interests whose Web site is hosted in the USA, and the Kigali-based Banque de Commerce, de Developpement et d’Industrie Web site is hosted in South Africa. See Carmichael (2002a) for a more comprehensive account.
12. At this point it must be stressed that the model of ‘education and avoidance’ advanced in Stephenson’s novel seems to involve instruction in guerrilla warfare techniques rather than the establishment of community memory resources.
14. While the occupants of Sealand claim that [their] independence was upheld in a 1968 British court decision where the judge held that Rough’s Tower stood in international waters and did not fall under the legal jurisdiction of the United Kingdom, the UK government differs in its interpretation. Commenting on claims that Haven’s servers would fall outside UK jurisdiction, a UK Home Office spokesperson stated, “the UK does not recognise Sealand as an independent state. It is within UK territorial waters. If they set up a computer provider there, we may require them to provide us with an intercept capability” (cited in Cohen, 2000).
15. See, for example, the Data Haven project.
16. The development of XML is coordinated by the World Wide Web Consortium, while Goldfarb & Prescod (2000) provide a useful review of practical applications of XML. The value of XML in the storage, transport and analysis of qualitative data is beginning to be recognized by social scientists: see, for example, Muhr (2000) and Carmichael (2002b).
17. In the UK, for example, acts as a central mirror for software resources for academic institutions; globally, ‘SunSites’ are regional mirrors, generally located within university networks but open to public access, from which users can download current versions of software and access archives and other resources.
18. As suggested previously, however, Holocaust deniers and other extremists may themselves use some of these tools and strategies. The Zundelsite is propagated across the Internet using mirroring, and peer-to-peer technologies, with their potential for the circumvention of censorship, might – whatever the intentions of their developers – ultimately prove useful tools for ‘leaderless resistance’ organizations seeking to operate clandestinely across national or international networks.
21. The National Memorial to the Genocide, unveiled in 1998, is located within the boundary of Kigali airport rather than at any of the genocide sites. Its hasty construction, in advance of a visit by US President Bill Clinton, as well as its location, has been criticized by survivors’ groups.
22. The ‘Nameless Library’ memorial in Judenplatz, Vienna, designed by Rachel Whiteread, was finally unveiled in October 2000, nearly four years later than planned after a series of artistic debates and political interventions. Debate continues in Germany as to the most appropriate form of commemoration of victims of the Holocaust. See Young (2000, especially chap. 7) for a review of the issues and these debates.


African Rights (1995). Rwanda: Death, despair and defiance. London: African Rights.

Anderson, R. J. (1996). The eternity service. Retrieved November 21, 2002 from

Arquilla, J., & Ronfeldt, D. (1997a). The advent of netwar. In J. Arquilla & D. Ronfeldt (Eds.), In Athena’s camp (pp. 275-293). Santa Monica, CA: RAND.

Arquilla, J., & Ronfeldt, D. (1997b). Looking ahead: Preparing for information-age conflict. In J. Arquilla & D. Ronfeldt (Eds.), In Athena’s camp (pp. 439-493). Santa Monica, CA: RAND.

Bauer, M. (2001). Battening down the hatches with Bastille. Linux Journal, 84, 30+.

Beam, L. (1992). Leaderless resistance, The Seditionist 12. Retrieved November 21, 2002 from

Beam, L. (n.d.). The Holocaust as a mechanism for suppressing the truth. Retrieved November 21, 2002, from

Brown, A. (2001). Red Rover. In A. Oram (Ed.), Peer-to-peer: Harnessing the power of disruptive technologies (pp.133-144). Sebastopol, CA: O’Reilly Associates.

Brunn, S., & Cottle, C. (1997). Small states and cyberboosterism. The Geographical Review 87(2), 240-258.

Carmichael, P. (2002a). Information interventions, media development and the Internet. In M. Price & M. Thompson (Eds.), Forging peace: Information, human rights, and the management of media space (pp. 365-392). Edinburgh: Edinburgh University Press.

Carmichael, P. (2002b). Extensible Markup Language and qualitative data analysis. Forum Qualitative Sozialforschung: Special Edition on Using Technology in the Qualitative Research Process 3(2). Retrieved November 21, 2002 from

CERT (1999). Distributed denial of service tools (CERT incident note IN-99-07). CERT Coordination Center Software Engineering Institute. Retrieved November 21, 2002 from

Clark, I. (1999). A distributed decentralised information storage and retrieval system. Retrieved November 21, 2002 from

Cohen, D. (2000, June 6). Cold water poured on Sealand security. The Guardian. Retrieved November 21, 2002 from,3604,328707,00.html.

Collin, M. (2001). This is Serbia calling: Rock ‘n’ roll radio and Belgrade’s underground resistance. London: Serpent’s Tail.

Curtin, M. (2000). On guard: Fortifying your site against attack. Web Techniques, 5(4), 46-50.

Dingledine, R., Freedman M., & Molnar, D. (2001). Free haven. In A. Oram (Ed.), Peer-to-peer: Harnessing the power of disruptive technologies (pp.159-187). Sebastopol, CA: O’Reilly Associates.

European Commission against Racism and Intolerance (ECRI) (2001). Annual report on ECRI’s activities covering the period from 1 January to 31 December 2000. Strasbourg: Council of Europe.

Garfinkel, S. (2000). Welcome to Sealand. Now bugger off. Wired, 8(7). Retrieved November 21, 2002 from

Goldberg, A., & Yianilos, P. (1998). Towards an archival intermemory. In Proceedings of I.E.E.E. International Forum on Research and Technology Advances in Digital Libraries (pp. 147-156).

Goldfarb, C. F., & Prescod, P. (2000). The XML handbook. Upper Saddle River, NJ: Prentice Hall.

Gourevitch, P. (1998). We wish to inform you that tomorrow we will be killed with our families. London: Macmillan/Picador.

Harknett, R. J. (1996). Information warfare and deterrence. Parameters: US Army War College Quarterly, Autumn, 93-107.

Harwood, R. [Richard Verrall] (1975). Did six million really die? Richmond, VA: Historical Review Press.

Hatewatch (2000). Hacking and hate: Virtual attacks with real consequences. Retrieved April 12, 2001 from

Human Rights Watch (1996). Shattered lives: Sexual violence during the Rwandan genocide and its aftermath. Retrieved November 20, 2002 from

Kan, G. (2001). Gnutella. In A. Oram (Ed.), Peer-to-peer: Harnessing the power of disruptive technologies (pp. 94-122). Sebastopol, CA: O’Reilly Associates.

Kubiatowicz, J., Bindel, D., Chen, Y., Czerwinski, S., Eaton, P., Geels, D., Gummadi, R., Rhea, S., Weatherspoon, H., Weimer, W., Wells, C., & Zhao, B. (2000). OceanStore: An architecture for global-scale persistent storage. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2000), Cambridge, MA. Retrieved November 21, 2002 from

Markovic, I. (1998, February). Tactical media as a tool for survival in the war zone. In J. Beasley-Murray, P. Husbands, & V. Brown (Chairs), Globalization from below: Contingency, conflict, contestation in historical perspective, Duke University, Durham, North Carolina.

Miller, A. (2000, February 17). NNTP IP address spoofing, tracing abuse. Message posted to the BIND Users electronic mailing list, archived at

Muhr, T. (2000). Increasing the reusability of qualitative data with XML. Forum Qualitative Sozialforschung, 1(3). Retrieved November 20, 2002 from

Northcutt, S., & Novak, J. (2001). Network intrusion detection: An analyst’s handbook (2nd ed.). Indianapolis, IN: New Riders.

Prunier, G. (1995). The Rwanda crisis 1959-1994: History of a genocide. London: Charles Hurst and Co.

SANS Institute (2000). Consensus roadmap for defeating distributed denial of service attacks: A project of the partnership for critical infrastructure security (version 1.10). Retrieved November 21, 2002 from

Stephenson, N. (1999). Cryptonomicon. London: Heinemann.

Wack, J., & Carnahan, L. (1995). Keeping your site comfortably secure: An introduction to Internet firewalls. National Institute of Standards and Technology, U.S. Department of Commerce. Retrieved November 21, 2002 from

Waldman, M., Rubin, A., & Cranor, L. (2000, August). Publius: A robust, tamper-evident, censorship-resistant Web publishing system. In S. Bellovin & G. Rose (Chairs), 9th Usenix Security Symposium, Denver, Colorado.

Waldman, M., Cranor, L., & Rubin, A. (2001). Trust. In A. Oram (Ed.), Peer-to-peer: Harnessing the power of disruptive technologies (pp. 242-270). Sebastopol, CA: O’Reilly Associates.

Whine, M. (1999). Cyberspace: A new medium for communication and command and control by extremists. Studies in Conflict and Terrorism, 22(3), 231-245.

Wiley, B. (2001). Interoperability through gateways. In A. Oram (Ed.), Peer-to-peer: Harnessing the power of disruptive technologies (pp. 381-392). Sebastopol, CA: O’Reilly Associates.

Yee, D. (2001). Cyberhate conference report. Retrieved November 21, 2002 from

Young, J. (2000). At memory’s edge: After-images of the Holocaust in contemporary art and architecture. London: Yale University Press.

Zwicky, E., Cooper, S., & Chapman, D. (2000). Building Internet firewalls (2nd ed.). Sebastopol, CA: O’Reilly Associates.

About the Author

Patrick Carmichael is Lecturer in IT and Education at the University of Reading, UK. His research is largely concerned with the development of knowledge management tools for educational and other civil society projects in the UK, S.E. Europe and Africa. He is author of “Information Interventions, Media Development and the Internet” in Price, M., & Thompson, M. (Eds.), Forging peace: Intervention, human rights, and the management of media space (Edinburgh University Press, 2002) and is currently editing a collection of essays on Academic Collaboration and Networking in S.E. Europe to be published by the Austrian Institute of East and Southeast European Studies (OSI).
Address: Institute of Education, Bulmershe Court, University of Reading, Reading, Berkshire RG6 1HY UK. Ph: 01189 875123 ext. 4867.

©Copyright 2003 Journal of Computer-Mediated Communication

