2008 WholeBrainEmulation


Subject Headings: Whole Brain Emulation, Computational Neuroscience.

Notes

Cited By

Quotes

Abstract

Whole brain emulation (WBE), the possible future one-to-one modelling of the function of the human brain, is academically interesting and important for several reasons:

WBE represents a formidable engineering and research problem, yet one which appears to have a well-defined goal and could, it would seem, be achieved by extrapolations of current technology. This is unlike many other suggested radically transformative technologies, such as artificial intelligence, where we do not have any clear metric of how far we are from success.

In order to develop ideas about the feasibility of WBE, ground technology foresight and stimulate interdisciplinary exchange, the Future of Humanity Institute hosted a workshop on May 26 and 27, 2007, in Oxford. Invited experts from areas such as computational neuroscience, brain-scanning technology, computing, nanotechnology, and neurobiology presented their findings and discussed the possibilities, problems and milestones that would have to be reached before WBE becomes feasible.

The workshop avoided dealing with socioeconomic ramifications and with philosophical issues such as theory of mind, identity or ethics. While important, such discussions would undoubtedly benefit from a more comprehensive understanding of the brain - and it was this understanding that we wished to focus on furthering during the workshop. Such issues will likely be dealt with at future workshops.

Conclusions

It appears feasible within the foreseeable future to store the full connectivity or even multistate compartment models of all neurons in the brain within the working memory of a large computing system.

Achieving the performance needed for real-time emulation appears to be a more serious computational problem. However, the uncertainties in this estimate are also larger, since it depends on the currently unknown number of required states, the computational complexity of updating them (which may be amenable to drastic improvements if algorithmic shortcuts can be found), the presumed limitation of computer hardware improvements to a Moore’s law growth rate, and the interplay between improving processors and improving parallelism.[16] A rough conclusion would nevertheless be that if electrophysiological models are enough, full human brain emulations should be possible before mid-century. Animal models of simple mammals would be possible one to two decades before this.

[16] This can be seen in the ongoing debates about whether consumer GPU performance should be regarded as being in the $0.2 per GFLOPS range; if WBE can be mapped to such cheap, high-performance special-purpose hardware, several orders of magnitude of improvement can be achieved.
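As a back-of-the-envelope illustration of the Moore's-law extrapolation above, the sketch below computes how long a fixed doubling time would take to close the gap between an assumed present-day capacity and an assumed emulation demand. All three inputs (the 10^18 FLOPS demand, the 10^15 FLOPS starting point and the 1.5-year doubling time) are illustrative assumptions, not figures endorsed by the report.

```python
import math

def years_until_affordable(demand_flops, available_flops_today, doubling_time_years=1.5):
    """Years until capacity growing at a Moore's-law rate reaches a given demand.

    Purely illustrative: the starting capacity and doubling time are assumptions,
    and real hardware trends need not follow a clean exponential.
    """
    if available_flops_today >= demand_flops:
        return 0.0
    doublings_needed = math.log2(demand_flops / available_flops_today)
    return doublings_needed * doubling_time_years

# Hypothetical electrophysiological-level demand (cf. the Appendix A estimates)
# versus an assumed 10^15 FLOPS system available today.
print(years_until_affordable(demand_flops=1e18, available_flops_today=1e15))  # ~15 years
# A 10^28 FLOPS molecular-level model would instead take on the order of 65 years
# at the same growth rate, which is why the required level of detail dominates the timeline.
```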

Validation

As computer software increased in complexity, previous methods of debugging became insufficient, especially as software development moved from small groups to large projects in large organisations. This led to the development of a number of software testing methodologies aimed at improving quality (Gelperin and Hetzel, 1988). Currently, neuroscience appears to be in an early “debugging” paradigm, where data and procedures certainly are tested in various ways, but usually just through replication or as an ad hoc activity when something unexpected occurs. For large-scale neuroscience, testing and validation methods need to be incorporated into the research process to ensure that the process works, that the data provided to other parts of the research is accurate, and that the link between reality and model is firm.
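One way to make such validation routine is to treat model checks like automated regression tests in software engineering. Below is a minimal, hypothetical sketch in that spirit: a simulated quantity is compared against a stored reference measurement within a tolerance, as a test suite would do after every change to data or code. The function name, the firing-rate example and the 5% tolerance are all invented for illustration.

```python
def validate_against_reference(simulated_rate_hz, reference_rate_hz, tolerance=0.05):
    """Check that a simulated mean firing rate stays within a tolerance of a reference.

    A stand-in for the kind of automated check a large-scale neuroscience pipeline
    could run routinely; the tolerance is an arbitrary placeholder.
    """
    relative_error = abs(simulated_rate_hz - reference_rate_hz) / reference_rate_hz
    return relative_error <= tolerance

# Example: a hypothetical model's output versus an experimentally recorded mean rate.
assert validate_against_reference(simulated_rate_hz=4.1, reference_rate_hz=4.0)
```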

An important early and ongoing research goal would be to quantify how strongly phenomena at each level of scale contribute to the higher brain functions of interest. In particular, it is important to know in how much detail each level needs to be simulated in order to produce the same kind of emergent behaviour at the next level up. In some cases it might be possible to “prove” this by performing simulations / emulations at different levels of resolution and comparing their results. This would be an important application of early small-scale emulations and would help pave the way for larger ones.
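A toy version of such a cross-resolution comparison is sketched below: the same leaky integrate-and-fire neuron is integrated with a fine and a coarse time step, and the resulting spike counts are compared. The model, its parameters and the choice of spike count as the comparison statistic are illustrative assumptions; a real study would compare far richer models and measures.

```python
def lif_spike_count(dt_ms, t_total_ms=500.0, i_input=1.6, tau_ms=20.0,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, forward-Euler integration.

    Returns the number of spikes fired over t_total_ms; units and parameter
    values are dimensionless toy choices for this example.
    """
    v, spikes = v_rest, 0
    for _ in range(int(t_total_ms / dt_ms)):
        v += (dt_ms / tau_ms) * (-(v - v_rest) + i_input)
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

fine = lif_spike_count(dt_ms=0.01)   # "high-resolution" run
coarse = lif_spike_count(dt_ms=1.0)  # "low-resolution" run
print(fine, coarse)
# If the coarse run reproduces the fine run's spike count, the extra temporal
# resolution buys little for this particular measure of behaviour.
```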

Ideally, a “gold standard” model at the highest possible resolution could be used to test how far one can deviate from it before noticeable effects occur, and to determine which factors are irrelevant for higher-level phenomena. Some of this information may already exist in the literature; some needs to be discovered through new computational neuroscience research. Exploration of model families with different levels of biological detail is already done occasionally.

A complementary approach is to manufacture data where the ground truth is known (“phantom datasets”) and then apply reconstruction methods to this data to see how well they can deduce the true network. For example, the NETMORPH system models neurite outgrowth, producing detailed neural morphology and network connectivity (Koene, 2008); this can then be used to make virtual slices. Multiple datasets can be generated to test the overall reliability of reconstruction methods.
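A minimal, hypothetical version of this idea (independent of NETMORPH) is sketched below: generate a random ground-truth connectivity matrix, corrupt it with simulated reconstruction errors, and score the reconstruction against the known truth using precision and recall. The network size and error rates are arbitrary placeholders.

```python
import random

def make_phantom_network(n_neurons=50, p_connect=0.1, seed=0):
    """Ground-truth random connectivity: truth[i][j] is True if neuron i synapses onto j."""
    rng = random.Random(seed)
    return [[(rng.random() < p_connect) and i != j for j in range(n_neurons)]
            for i in range(n_neurons)]

def simulate_reconstruction(truth, p_missed=0.05, p_spurious=0.01, seed=1):
    """Imperfect 'reconstruction': drop some true edges and add some spurious ones."""
    rng = random.Random(seed)
    return [[(edge and rng.random() > p_missed) or (not edge and rng.random() < p_spurious)
             for edge in row] for row in truth]

def precision_recall(truth, recon):
    pairs = [(t, r) for row_t, row_r in zip(truth, recon) for t, r in zip(row_t, row_r)]
    tp = sum(1 for t, r in pairs if t and r)
    fp = sum(1 for t, r in pairs if not t and r)
    fn = sum(1 for t, r in pairs if t and not r)
    return tp / (tp + fp), tp / (tp + fn)

truth = make_phantom_network()
recon = simulate_reconstruction(truth)
print(precision_recall(truth, recon))  # roughly (0.9, 0.95) for these error rates
```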

Discussion

As this review shows, WBE on the neuronal / synaptic level requires relatively modest increases in microscopy resolution, a less trivial development of automation for scanning and image processing, a research push at the problem of inferring functional properties of neurons and synapses, and relatively business-as-usual development of computational neuroscience models and computer hardware. This assumes that this is the appropriate level of description of the brain, and that we find ways of accurately simulating the subsystems that occur on this level. Conversely, pursuing this research agenda will also help detect whether there are low-level effects that have significant influence on higher level systems, requiring an increase in simulation and scanning resolution.

There do not appear to exist any obstacles to attempting to emulate an invertebrate organism today. We are still largely ignorant of the networks that make up the brains of even modestly complex organisms. Obtaining detailed anatomical information of a small brain appears entirely feasible and useful to neuroscience, and would be a critical first step towards WBE. Such a project would serve as both a proof of concept and a test bed for further development. If WBE is pursued successfully, at present it looks like the need for raw computing power for real-time simulation and funding for building large-scale automated scanning / processing facilities are the factors most likely to hold back large-scale simulations.

Appendix A: Estimates of the computational capacity / demands of the human brain

The most common approach is a straightforward multiplicative estimate: multiply the number of neurons by the average number of synapses per neuron and by an assumed amount of information per synapse (or number of operations per second per synapse). This multiplicative method has been applied to microtubuli and proteins too.
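A minimal version of this multiplicative estimate, using illustrative round numbers in the range that appears in Table 10 (10^11 neurons, about 10^3 synapses per neuron, roughly 100 Hz activity, one operation per synaptic event), is sketched below; the specific figures are assumptions chosen for the example, not the report's own estimate.

```python
def multiplicative_ops_estimate(neurons=1e11, synapses_per_neuron=1e3,
                                firing_rate_hz=100, ops_per_synaptic_event=1):
    """Straightforward multiplicative estimate of synaptic operations per second.

    Illustrative only: the Table 10 entries differ mainly in which of these
    factors they assume and at which level of description they work.
    """
    return neurons * synapses_per_neuron * firing_rate_hz * ops_per_synaptic_event

print(f"{multiplicative_ops_estimate():.0e} synaptic ops/s")  # 1e+16 with these inputs
```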

However, it still might be necessary to store concentrations of several chemical species, neurotransmitter types and other data if a biologically realistic model is needed (especially the identities of the pre- and postsynaptic neurons). Some estimates of the storage requirements of brain emulation are included in the table below.
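As a worked example of such a storage estimate (in the spirit of the Fiala 2007 entry in Table 10, but with assumed round numbers), the sketch below counts bits for the pre- and postsynaptic neuron identities plus a small per-synapse state vector; the neuron and synapse counts and the single byte of state are illustrative assumptions.

```python
import math

def synaptic_storage_bits(n_neurons=1e11, synapses_per_neuron=1e3, state_bytes=1):
    """Bits needed to store connectivity plus per-synapse dynamic state.

    Each synapse stores two neuron identifiers (pre- and postsynaptic) plus
    state_bytes of state (weights, delays, chemical concentrations, ...).
    """
    n_synapses = n_neurons * synapses_per_neuron
    id_bits = math.ceil(math.log2(n_neurons))         # ~37 bits to address 10^11 neurons
    bits_per_synapse = 2 * id_bits + 8 * state_bytes  # two identifiers + state
    return n_synapses * bits_per_synapse

print(f"{synaptic_storage_bits():.1e} bits")  # ~8.2e+15 bits under these assumptions
```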

Other estimation methods are based on analogy or constraints. (Moravec, 1999) suggested exploiting the known requirements of image processing by equating them with a corresponding neural structure (the retina), and then scaling up the result. (Merkle, 1989a) used energy constraints on elementary neural operations. (Landauer, 1986) attempted an estimation based on experimental psychological memory and signal theory.

The assumption of on the order of one bit of information per synapse has some support on theoretical grounds. Models of associative neural networks have an information storage capacity slightly under 1 bit per synapse, depending on what kind of information is encoded (Nadal, 1991; Nadal and Toulouse, 1990). Extending the dynamics of synapses to store sequence data does not increase this capacity (Rehn and Lansner, 2004). Geometrical and combinatorial considerations suggest 3-5 bits per synapse (Stepanyants, Hof et al., 2002; Kalisman, Silberberg et al., 2005). Fitting theoretical models to Purkinje cells suggests that they can reach 0.25 bits per synapse (Brunel, Hakim et al., 2004).

Table 10: Estimates of computational capacity of the human brain. Units have been converted into FLOPS and bits whenever possible. Levels refer to Table 2.

  • Source | Assumptions | Computational demands | Memory
  • (Leitl, 1995) | Assuming 10^10 neurons, 1,000 synapses per neuron, 34 bit ID per neuron and 8 bit representation of dynamic state, synaptic weights and delays. [Level 5] | | 5·10^15 bits (but notes that the data can likely be compressed)
  • (Tuszynski, 2006) | Assuming microtubuli dimer states as bits and operating on nanosecond switching times. [Level 10] | 10^28 FLOPS | 8·10^19 bits
  • (Kurzweil, 1999) | Based on 100 billion neurons with 1,000 connections and 200 calculations per second. [Level 4] | 2·10^16 FLOPS | 10^12 bits
  • (Thagard, 2002) | Argues that the number of computational elements in the brain is greater than the number of neurons, possibly even up to the 10^17 individual protein molecules. [Level 8] | 10^23 FLOPS |
  • (Landauer, 1986) | Assuming 2 bits learning per second during conscious time, experiment based. [Level 1] | | 1.5·10^9 bits (10^9 bits with loss)
  • (Neumann, 1958) | Storing all impulses over a lifetime. | | 10^20 bits
  • (Wang, Liu et al., 2003) | Memories are stored as relations between neurons. | | 10^8432 bits (see footnote 17)
  • (Freitas Jr., 1996) | 10^10 neurons, 1,000 synapses, firing 10 Hz. [Level 4] | 10^14 bits/second |
  • (Bostrom, 1998) | 10^11 neurons, 5·10^3 synapses, 100 Hz, each signal worth 5 bits. [Level 5] | 10^17 operations per second |
  • (Merkle, 1989a) | Energy constraints on Ranvier nodes. | 2·10^15 operations per second (10^13-10^16 ops/s) |
  • (Moravec, 1999; Moravec, 1988; Moravec, 1998) | Compares instructions needed for visual processing primitives with the retina, scales up to the whole brain at 10 updates per second. Produces 1,000 MIPS neurons. [Level 3] | 10^8 MIPS | 8·10^14 bits
  • (Merkle, 1989a) | Retina scale-up. [Level 3] | 10^12-10^14 operations per second |
  • (Dix, 2005) | 10 billion neurons, 10,000 synaptic operations per cycle, 100 Hz cycle time. [Level 4] | 10^16 synaptic ops/s | 4·10^15 bits (for structural information)
  • (Cherniak, 1990) | 10^10 neurons, 1,000 synapses each. [Level 4] | | 10^13 bits
  • (Fiala, 2007) | 10^14 synapses, identity coded by 48 bits plus 2x36 bits for pre- and postsynaptic neuron IDs, 1 byte states, 10 ms update time. [Level 4] | 256,000 terabytes/s | 2·10^16 bits (for structural information)
  • (Seitz) | 50-200 billion neurons, 20,000 shared synapses per neuron with 256 distinguishable levels, 40 Hz firing. [Level 5] | 2·10^12 synaptic operations per second | 4·10^15 - 8·10^15 bits
  • (Malickas, 1996) | 10^11 neurons, 10^2-10^4 synapses, 100-1,000 Hz activity. [Level 4] | 10^15-10^18 synaptic operations per second |
  • (based on Izhikevich, 2004) | 1·10^11 neurons, each with 10^4 compartments running the basic Hodgkin-Huxley equations at 1,200 FLOPS each; each compartment would have 4 dynamical variables and 10 parameters described by one byte each. | 1.2·10^18 FLOPS | 1.12·10^28 bits

References


* (Sandberg & Bostrom, 2008) ⇒ Anders Sandberg, and Nick Bostrom. (2008). "Whole Brain Emulation: A Roadmap." Technical Report #2008-3, Future of Humanity Institute, Oxford University.