
Equivalent-accuracy accelerated neural-network training using analogue memory

Abstract

Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply–accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware–software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with ‘polarity inversion’ to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today’s graphics processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers.
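
To make the weight encoding concrete, the minimal NumPy sketch below (not the authors' implementation) expresses each synaptic weight as W = F(G+ − G−) ± (g − gshared), following the unit cell described in Fig. 2 and Extended Data Figs 8 and 9, and performs the parallel multiply–accumulate of a forward pass. F = 3 is quoted in Extended Data Fig. 1; the conductance ranges, array sizes and helper names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the weight encoding and the analogue multiply-accumulate.
F = 3.0  # gain factor giving the PCM pair higher significance (Extended Data Fig. 1)

def weight(G_plus, G_minus, g, g_shared, polarity=+1):
    """Weight stored across a higher-significance PCM pair (G+, G-) and a
    lower-significance volatile 3T1C pair (g, g_shared); `polarity` models
    the sign inversion applied to g after each transfer."""
    return F * (G_plus - G_minus) + polarity * (g - g_shared)

def forward(x, G_plus, G_minus, g, g_shared, polarity=+1):
    """Parallel multiply-accumulate performed in the analogue domain:
    each output column integrates sum_i W_ij * x_i."""
    W = weight(G_plus, G_minus, g, g_shared, polarity)
    return W.T @ x

rng = np.random.default_rng(0)
G_plus, G_minus = rng.uniform(0.0, 20.0, (2, 4, 3))   # microsiemens (assumed range)
g, g_shared = rng.uniform(0.0, 5.0, (2, 4, 3))        # assumed range
x = rng.uniform(0.0, 1.0, 4)                          # upstream excitations
print(forward(x, G_plus, G_minus, g, g_shared))
```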


Fig. 1: Mapping a fully connected neural network onto NVM arrays.
Fig. 2: Schematic of an analogue-memory unit cell.
Fig. 3: Simulated response of different unit cells to nearly offsetting weight-update requests.
Fig. 4: Mixed hardware–software results on the MNIST dataset.
Fig. 5: Mixed hardware–software results on the MNIST-backrand and CIFAR-10/100 datasets.
Fig. 6: Accuracy comparison and effect of different techniques.

References

  1. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

  2. Coates, A. et al. Deep learning with COTS HPC systems. In Proc. 30th International Conference on Machine Learning 1337–1345 (Association for Computing Machinery, 2013).

  3. Gupta, S., Agrawal, A., Gopalakrishnan, K. & Narayanan, P. Deep learning with limited numerical precision. In Proc. 32nd International Conference on Machine Learning 1737–1746 (Association for Computing Machinery, 2015).

  4. Merolla, P., Appuswamy, R., Arthur, J., Esser, S. K. & Modha, D. Deep neural networks are robust to weight binarization and other non-linear distortions. Preprint at https://arxiv.org/abs/1606.01981 (2016).

  5. Nurvitadhi, E. et al. Can FPGAs beat GPUs in accelerating next-generation deep neural networks? In Proc. 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays 5–14 (Association for Computing Machinery, 2017).

  6. Jouppi, N. P. et al. In-datacenter performance analysis of a tensor processing unit. In Proc. 2017 International Symposium on Computer Architecture 1–12 (Association for Computing Machinery, 2017).

  7. Merolla, P. A. et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673 (2014).

  8. Esser, S. K. et al. Convolutional networks for fast, energy-efficient neuromorphic computing. Proc. Natl Acad. Sci. USA 113, 11441–11446 (2016).

  9. Morie, T. & Amemiya, Y. An all-analog expandable neural network LSI with on-chip backpropagation learning. IEEE J. Solid-State Circuits 29, 1086–1093 (1994).

  10. Burr, G. W. et al. Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element. In 2014 IEEE International Electron Devices Meeting T29.5 (IEEE, 2014).

  11. Burr, G. W. et al. Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element. IEEE Trans. Electron Dev. 62, 3498–3507 (2015).

  12. Gokmen, T. & Vlasov, Y. Acceleration of deep neural network training with resistive cross-point devices: design considerations. Front. Neurosci. 10, 333 (2016).

  13. Burr, G. W. et al. Neuromorphic computing using non-volatile memory. Adv. Phys. X 2, 89–124 (2017).

  14. Yu, S. et al. Scaling-up resistive synaptic arrays for neuro-inspired architecture: challenges and prospect. In 2015 IEEE International Electron Devices Meeting 17.3 (IEEE, 2015).

  15. Gao, L. et al. Fully parallel write/read in resistive synaptic array for accelerating on-chip learning. Nanotechnology 26, 455204 (2015).

  16. Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).

  17. Jang, J.-W., Park, S., Burr, G. W., Hwang, H. & Jeong, Y.-H. Optimization of conductance change in Pr1−xCaxMnO3-based synaptic devices for neuromorphic systems. IEEE Electron Device Lett. 36, 457–459 (2015).

  18. Jeong, Y. J., Kim, S. & Lu, W. D. Utilizing multiple state variables to improve the dynamic range of analog switching in a memristor. Appl. Phys. Lett. 107, 173105 (2015).

  19. Kaneko, Y., Nishitani, Y. & Ueda, M. Ferroelectric artificial synapses for recognition of a multishaded image. IEEE Trans. Electron Dev. 61, 2827–2833 (2014).

  20. Nandakumar, S. R. et al. Mixed-precision training of deep neural networks using computational memory. Preprint at https://arxiv.org/abs/1712.01192 (2017).

  21. van de Burgt, Y. et al. A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing. Nat. Mater. 16, 414–418 (2017).

  22. Agarwal, S. et al. Achieving ideal accuracies in analog neuromorphic computing using periodic carry. In 2017 Symposium on VLSI Technology T13.2 (IEEE, 2017).

  23. LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).

  24. Krizhevsky, A. Learning Multiple Layers of Features From Tiny Images. Ch. 3, https://www.cs.toronto.edu/~kriz/cifar.html (2009).

  25. Narayanan, P. et al. Towards on-chip acceleration of the backpropagation algorithm using non-volatile memory. IBM J. Res. Develop. 61, 11 (2017).

  26. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by backpropagating errors. Nature 323, 533–536 (1986).

  27. Xu, Z. et al. Parallel programming of resistive cross-point array for synaptic plasticity. Procedia Comput. Sci. 41, 126–133 (2014).

  28. Papandreou, N. et al. Programming algorithms for multilevel phase-change memory. In 2011 IEEE International Symposium on Circuits and Systems 329–332 (IEEE, 2011).

  29. Alibart, F., Gao, L., Hoskins, B. D. & Strukov, D. B. High-precision tuning of state for memristive devices by adaptable variation-tolerant algorithm. Nanotechnology 23, 075201 (2012).

  30. Hu, M. et al. Dot-product engine for neuromorphic computing: programming 1T1M crossbar to accelerate matrix-vector multiplication. In Proc. 53rd Annual Design Automation Conference 19 (Association for Computing Machinery, 2016).

  31. Fuller, E. J. et al. Li-ion synaptic transistor for low power analog computing. Adv. Mater. 29, 1604310 (2017).

  32. Kim, S., Gokmen, T., Lee, H.-M. & Haensch, W. E. Analog CMOS-based resistive processing unit for deep neural network training. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems 422–425 (IEEE, 2017).

  33. Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning Ch. 8 (MIT Press, 2016).

  34. Donahue, J. et al. DeCAF: a deep convolutional activation feature for generic visual recognition. In Proc. 31st International Conference on Machine Learning 647–655 (Association for Computing Machinery, 2014).

  35. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the Inception architecture for computer vision. Preprint at https://arxiv.org/abs/1512.00567 (2015).

  36. Mujtaba, H. Nvidia Volta GV100 12nm FinFET GPU detailed – Tesla V100 specifications include 21 billion transistors, 5120 CUDA cores, 16 GB HBM2 with 900 GB/s bandwidth. Wccftech https://wccftech.com/nvidia-volta-gv100-gpu-tesla-v100-architecture-specifications-deep-dive/ (2017).

  37. Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).

  38. Cho, K., van Merrienboer, B., Bahdanau, D. & Bengio, Y. On the properties of neural machine translation: encoder–decoder approaches. Preprint at https://arxiv.org/abs/1409.1259 (2014).

  39. Burr, G. W. et al. Access devices for 3D crosspoint memory. J. Vac. Sci. Technol. B 32, 040802 (2014).

  40. Narayanan, P. et al. Reducing circuit design complexity for neuromorphic machine learning systems based on non-volatile memory arrays. In 2017 IEEE International Symposium on Circuits and Systems 1–4 (IEEE, 2017).

  41. Ielmini, D., Lacaita, A. L. & Mantegazza, D. Recovery and drift dynamics of resistance and threshold voltages in phase-change memories. IEEE Trans. Electron Dev. 54, 308–315 (2007).

  42. Pelgrom, M. J. M., Duinmaijer, A. C. J. & Welbers, A. P. G. Matching properties of MOS transistors. IEEE J. Solid-State Circuits 24, 1433–1439 (1989).

  43. Cao, Y. What is predictive technology model (PTM)? SIGDA Newsl. 39, 1 (2009).

  44. Bengio, Y., Louradour, J., Collobert, R. & Weston, J. Curriculum learning. In Proc. 26th Annual International Conference on Machine Learning 41–48 (Association for Computing Machinery, 2009).

Acknowledgements

We acknowledge management support from B. Kurdi, C. Lam, W. Wilcke, S. Narayan, T. C. Chen, W. Haensch, R. Divakaruni, J. Welser and D. Gil, and discussions with P. Solomon, S. Kim, A. Sebastian, K. Hosokawa and S. C. Lewis. This work was performed as part of the ‘Neuromorphic Devices & Architectures’ project under the auspices of the IBM Research Frontiers Institute (https://www.research.ibm.com/frontiers). We acknowledge advice and support from H. Riel, S. Gowda, D. Maynard and the member companies of the IBM RFI.

Reviewer information

Nature thanks G. C. Adam, R. Legenstein and the other anonymous reviewer(s) for their contribution to the peer review of this work.

Author information

Authors and Affiliations

Authors

Contributions

G.W.B. developed the multiple-conductances-of-varying-significance and polarity-inversion techniques; P.N. and G.W.B. designed the 3T1C unit cell; G.W.B., R.M.S., C.d.N., I.B. and P.N. developed the neural-network simulation software; R.M.S., I.B., C.d.N., S.S., M.B., N.C.P.F. and S.A. used the simulator to develop insights key to the success of the experiment; G.W.B. designed and S.A. extended the experimental apparatus; S.A. performed the experiments; H.T. designed the transfer learning experiment and performed the TensorFlow software training; P.N., G.W.B. and S.A. developed the SPICE modelling approach; P.N. performed the power analysis; M.G., S.A. and G.W.B. developed the triage approach used in the experiment; and all authors contributed to the writing and editing of the manuscript.

Corresponding author

Correspondence to Geoffrey W. Burr.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Flow chart comparing eventual and currently implemented DNN acceleration approaches.

a, Comparison between an eventual analogue-memory-based hardware implementation and our mixed software–hardware experiment. Although we do not implement CMOS neurons, we mimic their behaviour closely. In both schemes, weight update is performed on only the 3T1C g devices, and these contributions are later transferred to the PCM devices (G+ and G−). Owing to wall-clock throughput issues in our experiment, we have to perform all of the weight transfers at once. By contrast, in an eventual hardware implementation, weight transfer would take place on a distributed, column-by-column basis. Ideally, transfer for any weight column would be performed at a point in time when the neural-network computation, focused on some other layer, leaves that particular array core temporarily idle. b, Guidelines for optimizing the choice of transfer interval, depending on the time constant of the capacitor and the dynamic range of g. Because training of one image is performed in 240 ns, training of 8,000 images is performed in 8,000 × 240 ns = 1.92 ms, which is a substantial fraction of the time constant of the capacitor (5.16 ms). Despite allowing more of the dynamic range of g to be used, a longer transfer interval would probably suffer from poor retention of information in any volatile g device. However, even in the ideal case of an infinitely long time constant, the transfer interval would still need to be limited, owing to the finite dynamic range of g. A long transfer interval would probably result in g values saturating owing to weight updates, leading to loss of training information before transfer. c, Guidelines for optimizing the choice of gain factor F. We define ‘efficacy of post-transfer tuning’ as the inverse of the overall residual error after g tuning. Because a larger gain factor F means more available dynamic range for each weight, larger F is desirable. However, large F also amplifies any programming errors on the PCM devices due to intrinsic device variability and limits the correction that g can provide during post-transfer tuning. The efficacy should therefore decrease monotonically with F, although perhaps not linearly as sketched here. The value we chose (F = 3) represents a reasonable trade-off for the PCM and 3T1C devices used here. For other situations, F can be initially estimated as F = DRg/σ, where DRg is the dynamic range of g and σ is the standard deviation of the PCM programming error. Additional optimization comes with neural-network training, which includes the weak effect of the drift contribution.
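
As a worked version of these two guidelines, the short Python sketch below reproduces the numbers quoted above (240 ns per example, a 5.16 ms capacitor time constant, F = 3). The 6 μS dynamic range and 2 μS programming error used to recover F ≈ 3 are illustrative assumptions, not measured values.

```python
# Rough sizing of the gain factor F and the transfer interval, following the
# guidelines above. Only F = 3, 240 ns per example and the 5.16 ms time
# constant come from the text; the remaining numbers are assumptions.

def estimate_gain_factor(g_dynamic_range_uS, pcm_sigma_uS):
    """Initial estimate F = DRg / sigma, later refined by full
    neural-network training simulations."""
    return g_dynamic_range_uS / pcm_sigma_uS

def transfer_interval_fraction(images_per_transfer, ns_per_image=240.0,
                               cap_time_constant_ms=5.16):
    """Fraction of the capacitor time constant consumed by one transfer
    interval; large fractions imply poor retention of g before transfer."""
    interval_ms = images_per_transfer * ns_per_image * 1e-6
    return interval_ms / cap_time_constant_ms

print(estimate_gain_factor(g_dynamic_range_uS=6.0, pcm_sigma_uS=2.0))  # ~3
print(transfer_interval_fraction(8_000))  # ~0.37 of the time constant, as above
```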

Extended Data Fig. 2 Weight-update requests and resulting net weight change observed during neural-network training.

a–d, Simulation results based on MNIST 20-epoch simulations for the 2PCM + 3T1C cell with full CMOS variability and transfer polarity inversion (matched with the experimental results; a, b) and for the 2PCM cell (c, d). a, c, Correlation between the aggregate weight update across 16,000 training images (for 2PCM + 3T1C, this corresponds to two consecutive transfer intervals) and the total number of pulses applied to obtain this weight update. b, d, Correlation between the aggregate number of pulses and the total number of programming pulses applied. The points chosen for Fig. 3 (±100, 1,000 for 2PCM + 3T1C and ±10, 50 for 2PCM) represent typical values requested by the backpropagation algorithm. Insets show vertical cross-sections at ∑ΔW = 0, where the aggregate sum of all individual weight changes ΔW is zero (sum of pulses is zero).
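
The contrast between the two cells can be illustrated with a toy Python model of nearly offsetting weight-update requests, in the spirit of Fig. 3 and this figure. The saturating update rule and all numerical constants below are assumptions for illustration, not measured device characteristics.

```python
import numpy as np

def pcm_increment(G, n_pulses, g_max=1.0, alpha=0.05):
    """Unidirectional, saturating partial-SET pulses on one PCM conductance
    (a generic soft-saturation model, assumed for illustration)."""
    for _ in range(int(n_pulses)):
        G += alpha * (1.0 - G / g_max)
    return G

def cap_update(w, net_pulses, step=0.01):
    """Near-linear, bidirectional charge addition/subtraction on the 3T1C capacitor."""
    return w + net_pulses * step

rng = np.random.default_rng(1)
G_plus = G_minus = 0.0
w_cap = 0.0
for _ in range(1000):
    up, down = rng.integers(0, 5, size=2)   # nearly offsetting update requests
    G_plus = pcm_increment(G_plus, up)      # positive requests raise G+
    G_minus = pcm_increment(G_minus, down)  # negative requests raise G-
    w_cap = cap_update(w_cap, int(up) - int(down))

# Both PCM conductances saturate, so G+ - G- collapses towards zero and the
# small net request is lost; the near-linear capacitor integrates it instead.
print(G_plus - G_minus, w_cap)
```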

Extended Data Fig. 3 Experimental distributions for different datasets.

(Extension of Fig. 5.) a–f, Weight probability density functions (PDFs) and cumulative distribution functions (CDFs) of device conductances for MNIST-backrand (a, b), CIFAR-10 transfer learning (c, d) and CIFAR-100 transfer learning (e, f). Results are shown for the initial condition and increasing epochs, from 1 to 20. For the CIFAR-100 experiment only, we increased the transfer interval to 16,000 images to reduce the overall wall-clock time.

Extended Data Fig. 4 Effect of different techniques on neural-network training.

(Extension of Fig. 6.) a–d, Simulation results as in Fig. 6b, extended to all experiments performed: MNIST results (as in Fig. 6b; a), MNIST-backrand (b), CIFAR-10 transfer (c) and CIFAR-100 transfer (d). We introduce two parameters, xLR and δLR, to modify the crossbar-compatible weight-update scheme from its original conception (ref. 10). The upstream neurons fire a number of weight-update pulses based on the x input signal, the global learning rate η and the xLR coefficient; downstream neurons fire pulses depending on the error signal δ, the global η and the new δLR coefficient. xLR and δLR are both constant throughout training: xLR enables differentiation between upstream and downstream pulsing, but is constant across all layers; δLR enables careful tuning of the importance of δ for each weight layer. xLR modulation can provide substantial accuracy benefits for MNIST-backrand (b), and δLR modulation is beneficial for CIFAR-100 and particularly for MNIST (a, d). Although momentum and learning-rate (LR) decay are commonly used techniques (ref. 33), their absence would not have greatly affected our experimental results. Example triage mostly provides a wall-clock advantage, but also a slight improvement in accuracy for CIFAR-10/100 transfer learning by avoiding ‘useless’ weight updates.
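
A minimal sketch of this two-coefficient, crossbar-compatible update is given below. Only the split of the learning rate between upstream (x, xLR) and downstream (δ, δLR) pulse counts comes from the text; the pulse cap, the rounding and the treatment of the effective update as the product of the two pulse counts (approximating the outer product x δᵀ of backpropagation) are assumptions made for illustration.

```python
import numpy as np

def pulse_counts(values, lr_coeff, eta, max_pulses=31):
    """Number of update pulses each neuron fires, proportional to its signal
    magnitude scaled by the global learning rate and its own coefficient
    (xLR upstream, deltaLR downstream); cap and rounding are assumed."""
    counts = np.clip(np.rint(np.abs(values) * eta * lr_coeff), 0, max_pulses)
    return counts.astype(int), np.sign(values)

def crossbar_update(x, delta, eta, xLR, deltaLR):
    """Effective weight change at each cross-point: product of upstream and
    downstream pulse counts with the combined sign, one entry per synapse."""
    nx, sx = pulse_counts(x, xLR, eta)
    nd, sd = pulse_counts(delta, deltaLR, eta)
    return np.outer(nx * sx, nd * sd)

x = np.array([0.9, 0.1, 0.0])       # upstream excitations
delta = np.array([-0.3, 0.6])       # downstream error signals
print(crossbar_update(x, delta, eta=1.0, xLR=10.0, deltaLR=5.0))
```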

Extended Data Fig. 5 The safety-margin concept.

a, When the network classifies the output correctly (for example, the neuron with the highest output matches the ground-truth label), the safety margin is the positive difference between the correct neuron and the next-largest neuron. b, When the classification is incorrect, the safety margin is a negative number that indicates the gap by which the correct output neuron failed to have the highest value. Preferably, we would like to calculate the safety margin for every image in each epoch, because safety margins change after each backpropagation. This is the choice made within our experiment; in a full-chip implementation of an analogue-memory-based neural-network hardware accelerator with an effective minibatch size of 1, this would be fairly straightforward. Alternatively, either for minibatch-based training or for analogue hardware, we envision using a highly pipelined copy of the network designed for fast forward inference to compute safety margins using a recent copy of the network weights. These slightly ‘stale’ safety margins could then be used to implement example triage. c, Focus probability from 0% to 100% as a function of safety margin defined from −1 to 1. For all safety margins below some ‘acceptable’ threshold, the probability of choosing to perform backpropagation on this training example is 100%. As the safety margin increases above the acceptable threshold, the focus probability decreases linearly to a non-zero minimum focus probability, to ensure that some number of already well-learned images are also backpropagated despite their high safety margin. The mapping of safety margin to focus probability can be changed during training. In addition, reducing either the focus probability or the learning rate for examples with large negative safety margins (pink dotted line) avoids damage to overall generalization in pursuit of training examples that the network may never be able to classify successfully.
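
The sketch below implements the safety margin and the piecewise-linear focus probability described above; the particular threshold (0.1) and minimum probability (5%) are illustrative assumptions, and the mapping could be changed during training as noted.

```python
import numpy as np

def safety_margin(outputs, label):
    """Positive gap between the correct neuron and its strongest competitor
    when the classification is right; negative gap when it is wrong."""
    competitors = np.delete(outputs, label)
    return outputs[label] - competitors.max()

def focus_probability(margin, threshold=0.1, p_min=0.05):
    """100% below the 'acceptable' threshold, then decreasing linearly
    (on a margin scale that tops out at 1) to a non-zero floor."""
    if margin <= threshold:
        return 1.0
    slope = (1.0 - p_min) / (1.0 - threshold)
    return max(p_min, 1.0 - slope * (margin - threshold))

outputs = np.array([0.05, 0.7, 0.25])
m = safety_margin(outputs, label=1)
print(m)                      # 0.45: correctly classified, comfortable margin
print(focus_probability(m))   # reduced probability of backpropagating this easy example
```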

Extended Data Fig. 6 Safety-margin evolution during training.

During training (shown here for MNIST), the cumulative distribution of the safety margin shifts to the right, as training improves performance on the training examples. The intercept at a safety margin of zero represents the training error. Example triage can be thought of as the realization that the network does not need to train on all of the examples in the far right of this cumulative distribution, but should instead focus on those at small positive safety margins and below, with only a few training examples chosen from among those at high safety margins. The farther the safety-margin distribution moves to the right, the greater the acceleration factor that example triage can provide. Example triage can be considered a form of curriculum learning (ref. 44) based on the safety margin, which serves as a highly accurate analogue measure of the current degree of certainty of the neural network. However, a substantial difference is that curriculum learning focuses on the beginning of training, with the philosophy of starting with easy examples and moving to difficult training examples. By contrast, example triage becomes effective only once the network shows some degree of performance on the training set, and is then designed to skip over easy examples in favour of difficult training examples.

Extended Data Fig. 7 Experimental PCM programming distributions.

The measured cumulative distribution function of the conductances of 512 × 1,024 devices programmed from the full-reset state with eight-step set-transition ramp-down pulse sequences ranging from 1.7 ns to 550 ns in step duration (that is, from 13.6 ns to 4.4 μs in total duration) is shown. Even though the degree of control is worse for high conductances (above 20 μS), to the extent that the monotonicity of the mapping from pulse duration to conductance is disrupted, the vast majority of devices are programmed to conductances below 20 μS (see Fig. 4 and Extended Data Fig. 9).

Extended Data Fig. 8 Analysis of weight transfer from lower- to higher-significance conductance pairs.

a–c, Distributions obtained before the last transfer in the MNIST experiment: g and gshared distributions before transfer (a), the voltage on the capacitor of g (b) and the distribution of weights (c). gshared devices are implemented as an average of the read current from three 3T1C devices for every 128 dedicated g devices, to help to reduce variability. Just before transfer, the voltages on both g and gshared are programmed to 0.5 V after their contribution to the weight has been extracted. d–f, Just after the PCM transfer, the polarity of g is inverted; the dedicated g devices are then tuned to correct the transfer error incurred during the PCM programming operation. This leads to a broad distribution of voltages on these capacitors, centred at lower voltages than just before transfer (e). During the long transfer interval, charge leakage in all capacitors (through both NFETs and the PFET) causes voltages to increase towards about 0.8 V. During post-transfer tuning, the lowest voltage available to the charge-subtraction circuitry is increased so that no 3T1C device can be programmed below 0.25 V (cut-off visible in e). Because all 3T1C conductances below that capacitor voltage are effectively zero (see Extended Data Fig. 10a), if any device were allowed to return to the weight-update operations with such an extremely low capacitor voltage, the network would be forced to fire many positive weight updates before it could effectively change that weight. Although g and gshared show different shapes, the weight distribution is nearly the same as before transfer. The last transfer is shown not because it is the easiest but because it is the most important. The network has very little ability to recover from mistakes made during these last few transfers. However, data extracted for any of the other transfers throughout training would be almost indistinguishable from those shown here for the last transfer operation.
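
The capacitor-voltage behaviour described here can be summarized with a toy model: initialization at 0.5 V, leakage towards about 0.8 V with the roughly 5 ms time constant quoted in Extended Data Fig. 1, and a 0.25 V floor during post-transfer tuning. The single-exponential leakage form is an assumption made for this sketch.

```python
import numpy as np

def leak(v, t_ms, v_rest=0.8, tau_ms=5.16):
    """Capacitor voltage after t_ms of leakage towards v_rest
    (single-exponential form assumed)."""
    return v_rest + (v - v_rest) * np.exp(-t_ms / tau_ms)

def tune(v_target, v_floor=0.25):
    """Post-transfer tuning of g, clipped at the 0.25 V cut-off so no device
    is parked at an effectively zero conductance."""
    return max(v_target, v_floor)

print(leak(0.5, t_ms=1.92))   # ~0.59 V after one 8,000-image transfer interval
print(tune(0.1))              # clipped to 0.25 V
```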

Extended Data Fig. 9 Effect of PCM imperfections on weight transfer.

Correlation maps obtained from the last two transfers in the MNIST experiment illustrate a typical transfer operation. The target weight Wtransfer that we attempt to write into the PCM devices is not exactly the overall weight W, but instead Wtransfer = W − offset − [g(V = 0.5 V) − gshared(V = 0.5 V)]. The final two terms are the residual difference between the conductances of the g and gshared devices even when initialized to the same voltage, which allows the PCM devices to compensate partially for CMOS variability during transfer. The offset, equal to 2 μS, is added because g devices are not equally good at compensating positive and negative conductance errors. At the initialization voltage of 0.5 V, device conductance is relatively small (see Extended Data Fig. 10a), providing less dynamic range to move to smaller conductances and to correct PCM devices programmed to weights that are too positive. The initial 0.5 V was chosen carefully, to accommodate substantial ‘decay’ towards 0.8 V, providing much more dynamic range for increasing 3T1C conductance. A positive offset value strongly favours negative errors, allowing us to exploit the capability for g values to increase. When Wtransfer is positive but smaller than the offset, we reset both PCM devices and use g to correct the residual error. a, Correlation between the weight portion encoded in the PCM devices before transfer, that is, F(G+ − G−), and Wtransfer. Here we expect a difference because the neural-network training has changed the weights; we now need to checkpoint these weight changes from volatile storage on the 3T1C devices into non-volatile storage on the PCM devices. b, Correlation between the desired Wtransfer conductance differences and the actual F(G+ − G−) values obtained after the PCM programming operation. With perfect devices and no offset, this should be a diagonal line along y = x. The variability we see is caused partly by PCM programming error (unintended), partly by the intentional offset and partly by CMOS initialization mismatch (where we intentionally aim for a ‘wrong’ PCM conductance difference to help to compensate for our flawed CMOS devices). c, Correlation between the weights before (Wpre) and after (Wpost) transfer, after post-transfer tuning of g to compensate for the programming errors in b. The goal of the transfer operation is to obtain Wpost = Wpre, which would correspond to all points falling on the diagonal y = x. The effect of post-transfer tuning is clear from comparing the variability in b with the near-ideal behaviour in c. d–f, As in a–c, but for the negative-polarity transfer. Because the polarity of g is inverted, the offset is negative, and so the large dynamic range can be used to increase g to compensate for positive errors in the PCM weight.
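
A minimal sketch of the positive-polarity transfer target and of the reset rule described above is given below; the PCM programming step is idealized (no programming error is modelled) and the example weight values are arbitrary.

```python
# Sketch of the positive-polarity transfer target,
#   W_transfer = W - offset - [g(0.5 V) - g_shared(0.5 V)],
# and of the reset rule for small positive targets. Quantities are in the
# same conductance-difference units as the weight; example values are
# arbitrary and PCM programming is idealized.

OFFSET = 2.0  # microsiemens, as quoted above

def transfer_target(W, g_at_init, g_shared_at_init, offset=OFFSET):
    """Target for F(G+ - G-); for the negative-polarity transfer the offset
    changes sign, as described in the text."""
    return W - offset - (g_at_init - g_shared_at_init)

def program_pcm_pair(W_transfer, offset=OFFSET):
    """When W_transfer is positive but smaller than the offset, both PCMs
    are reset and g alone corrects the residual error."""
    if 0.0 < W_transfer < offset:
        return 0.0
    return W_transfer  # idealized programming, no error modelled

for W_pre in (4.7, 2.5):
    target = transfer_target(W_pre, g_at_init=0.3, g_shared_at_init=0.1)
    print(W_pre, target, program_pcm_pair(target))
```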

Extended Data Fig. 10 SPICE modelling of CMOS variability.

a–f, Monte Carlo circuit simulations of parameter variability in 3T1C cells: measured conductance versus the instantaneous voltage on the capacitor VC (a); PDF of the measured conductance at VC = 0.5 V (b); change in voltage versus the instantaneous voltage for up pulses (c); PDF of the change in voltage for up pulses at VC = 0.5 V (d); change in voltage versus the instantaneous voltage for down pulses (e); and PDF of the change in voltage for down pulses at VC = 0.5 V (f). Each graph shows data from 1,000 trials. Bold lines in a, c and e and dotted lines in b, d and f show the nominal transistor response. a, b, Variability in the read transistor whose gate is tied to the capacitor; c–f, variability due to variation in threshold voltage in the PMOS pull-up/NMOS pull-down FETs.
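
To convey the flavour of these Monte Carlo runs, the sketch below samples a threshold-voltage mismatch for each read transistor and examines the spread of read conductance at VC = 0.5 V. The square-law transistor model and every numeric value are assumptions for illustration, not the PTM-based SPICE models used here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                        # trials, as in the figure
VT_NOM, SIGMA_VT = 0.35, 0.03   # volts (assumed nominal threshold and mismatch)
K = 200e-6                      # A/V^2 transconductance factor (assumed)
V_READ = 0.2                    # volts, assumed read bias across the device

def read_conductance(vc, vt):
    """Effective read conductance of the transistor whose gate is tied to the
    capacitor; zero below threshold in this toy square-law model."""
    vov = max(vc - vt, 0.0)
    i_read = 0.5 * K * vov**2   # square-law drain current in saturation
    return i_read / V_READ

vt_samples = VT_NOM + SIGMA_VT * rng.standard_normal(N)
g_samples = np.array([read_conductance(0.5, vt) for vt in vt_samples])
print(g_samples.mean() * 1e6, g_samples.std() * 1e6)  # mean and spread in microsiemens
```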

About this article

Cite this article

Ambrogio, S., Narayanan, P., Tsai, H. et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67 (2018). https://doi.org/10.1038/s41586-018-0180-5
