Difference between spiking and firing

In the article "A Topological Paradigm for Hippocampal Spatial Map Formation Using Persistent Homology" by Y. Dabaghian, F. Mémoli, L. Frank, and G. Carlsson, I read some sentences that left me thoroughly confused about the use of the terms firing and spiking. Do they mean the same thing? It seems to me that when the authors say a place cell is firing or a place cell is spiking, they refer to exactly the same thing. The confusion comes from both words being used in the same sentence, as in the following sample sentences from the article (emphasis mine):

Indeed, a rat's path through a small space can later be re-traced with a high degree of accuracy by recording hippocampal spiking activity during its explorations and then analyzing the location, size, and firing rates of a mere 40-50 place fields [… ]

To understand what algorithms the brain might use to decode hippocampal place cell firing, then, we should rely solely on the information provided by place cell spiking activity [… ]

It is, in fact, generally assumed that neurons downstream of the hippocampus interpret place cell spiking patterns based on co-firing [… ]

It should thus be possible to trace the emergence of topological information as more and more spikes are fired [… ]

There are biophysical variables (firing rates, spike amplitude, etc.) [… ]


In your quotes, the terms spiking and firing are used as synonyms. Both refer to the same phenomenon, the generation of action potentials, and in this context there is no physiological or functional difference between spiking and firing.

However, note that in some contexts there may be a difference, namely some authors distinguish between a neuron spiking and bursting; the former referring to a tonic firing mode, the latter to a bursting firing mode (e.g. Ramcharan et al. (2000)).

Reference
- Ramcharan et al., Vis Neurosci (2000); 17(1):55-62


What is the Pfizer Vaccine?

Information about Pfizer: Definition

The Pfizer vaccine is a vaccine developed by Pfizer, Inc., and BioNTech against SARS-CoV-2, the coronavirus that causes Covid-19.

How the Pfizer vaccine is made:

mRNA is a nucleic acid similar to DNA; it is called a transcript because it is a literal copy of a DNA sequence. During protein synthesis, an mRNA transcript travels to the cytoplasm of a cell, where proteins are then made. The Pfizer vaccine is made by taking the mRNA that codes for the spike protein of the virus and coating it with a type of lipid nanoparticle. This lipid coating protects the nucleic acid. Once inserted into the body, the mRNA triggers our immune system to respond.

Advantages of getting Pfizer:

The effectiveness after two doses of the Pfizer vaccine is high, at about 95%, suggesting that herd immunity can be achieved if enough people are vaccinated. mRNA vaccines like the Pfizer Covid-19 vaccine can be made more quickly than traditional vaccines because they do not need to use the entire virus.

Are there any disadvantages of the Pfizer vaccine?

The vaccine has to be stored at very low temperatures of -80°C to -60°C, and sophisticated equipment is needed to produce the vaccine. Since the vaccine is made using mRNA, contamination is a risk, and the process of extracting and purifying the substance is highly technical, requiring a high level of skill in the manufacturing process.


Results

Generation of CaV3.3 Ca 2+ Channel Knockout Mice.

To explore the physiological function of CaV3.3, we carried out targeted disruption of the CaV3.3 gene in mice as shown in Fig. 1A. The targeting vector was designed to delete exon3 and exon4, which comprise the N terminus of the CaV3.3 protein (Amgen Co.; Fig. 1A). Following successful germ-line transmission, mice heterozygous for the targeting event were interbred to obtain homozygous CaV3.3 knockouts (KOs), as confirmed by PCR and/or Southern blot (Fig. 1B). No sex bias was observed in the offspring, and the expected Mendelian ratio was observed among wild-type, heterozygous, and homozygous mutant mice (data not shown). To assess whether disruption of the CaV3.3 gene was effective, we examined the level of CaV3.3 mRNA. RT-PCR analysis showed that no CaV3.3 mRNA was produced in the CaV3.3 −/− brain, indicating that the gene targeting resulted in a null mutation for this locus (Fig. 1C). We also used in situ hybridization to check the expression and cellular localization of CaV3.3 in wild-type and mutant mouse brains (Fig. 1D).

Targeted disruption of the mouse CaV3.3 gene and decreased T-type current in CaV3.3 KO. (A) Schematic representations of the wild-type CaV3.3 allele, targeting vector, and mutant allele. Exons are represented by black boxes. Exon3 and exon4 were designed to be deleted to generate the mutant allele. (B) Southern blot analysis of genomic DNA isolated from tails of wild-type (WT) and homozygous KO mice. The restriction enzymes and probes used are shown in A. The 12 kb segment corresponds to the wild-type allele; the 9 kb segment, to the targeted allele. (C) RT-PCR analysis was performed with mRNA derived from pooled samples of wild-type or CaV3.3 mutant whole brain. PCR primers for RT-PCR spanned exons 3–7. The 690 bp band indicates the PCR product of the wild type, whereas the CaV3.3 mutant displays no band. (D) Quantitative analysis of the expression of CaV3.3 mRNA by in situ hybridization. There is robust expression of CaV3.3 in the TRN of wild-type mice but no expression in mutant TRN. Probes for in situ hybridization were against exons 3 and 4. (E) LVA Ca 2+ currents were evoked by depolarizing pulses ranging from −100 to −40 mV in wild-type and CaV3.3 −/− neurons. (F) The current–voltage relationship displayed a significant reduction in peak current density in CaV3.3 −/− (open circles) compared with wild type (closed circles). Data are represented as mean ± SEM; *P < 0.05, **P < 0.01, ***P < 0.001, two-tailed t test.

T-Type Currents and Oscillatory Burst Firing Are Impaired in CaV3.3 KO Mice.

The functional loss of CaV3.3 was examined by whole-cell voltage clamp analysis of TRN neurons in acute brain slices from ∼3–4-wk-old KO and wild-type mice. LVA inward currents were evoked from a holding potential of –100 mV to depolarizing levels ranging from –90 to –40 mV (Fig. 1E). The peak density of a fast-inactivating current, typical of T-type Ca 2+ currents, evoked at appropriate test potentials was significantly reduced in CaV3.3 −/− TRN neurons compared with wild-type controls (Fig. 1F). These results are consistent with those reported previously (22) and suggest that CaV3.3 is responsible for most T-type Ca 2+ currents in TRN neurons. The small amount of residual current observed in CaV3.3 −/− neurons at –40 mV is probably mediated by CaV3.2, as observed previously (22).

To identify GABAergic neurons that project from TRN to TC, we crossed the CaV3.3 mutant mice with glutamate decarboxylase 65 (GAD65)-GFP transgenic mice. Injection of a retrograde tracer into the TC region of these mice retrogradely labeled some of the GFP-positive GABAergic neurons in the dorsomedial region of TRN (Fig. 2A), allowing us to identify and characterize GABAergic projection neurons of TRN. In acute brain slices from wild-type mice, a brief hyperpolarizing current pulse that brought the membrane potential to −100 mV evoked subsequent oscillatory rebound bursting in ∼40% of projection neurons, whereas 45% showed only a single low-threshold (LT) burst, and 15% exhibited no LT bursting. In CaV3.3 −/− mice, 80% of GFP-positive projection neurons showed no LT burst firing, whereas 20% exhibited a single weak LT burst and oscillatory bursts were never observed (Freeman–Halton extension of the Fisher’s exact probability test, P = 0.0000053) (Fig. 2B). Thus, the ability of TRN neurons to fire bursts of action potentials is severely impaired in CaV3.3 −/− mice.

Altered firing properties of TRN neurons lacking CaV3.3 channels and increased susceptibility to GBL-induced SWD in CaV3.3 KO mice. (A) TRN neurons labeled with red retrograde tracer injected into TC of GAD65-GFP mice. Confocal image of TRN neurons colabeled with retrograde beads (red) and GFP (expanded image). (B) Firing characteristics of CaV3.3 +/+ (upper three traces) and CaV3.3 −/− (lower two traces) TRN neurons. (C) EEG traces of adult CaV3.3 +/+ (Upper) and CaV3.3 −/− (Lower) mice for 1 min before and various times after GBL injection. SWDs marked with asterisks are expanded at the bottom. (D) SWD density calculated by total duration of SWD per min in CaV3.3 +/+ (black circles) and CaV3.3 −/− (red circles) mice. (E) Distribution of SWD episode duration after GBL injection. (F) Average EEG power spectrograms of CaV3.3 +/+ and CaV3.3 −/− mice during a 50-min recording. GBL was injected after 10 min of baseline recording. Data are represented as mean ± SEM; *P < 0.05, **P < 0.01.

Increased GBL-Induced Absence Seizures in CaV3.3 KO or Knockdown Mice.

Because CaV3.3 is abundantly expressed in TRN but not in TC (17), we could use CaV3.3 −/− mice to determine the role of TRN neuron bursting in SWDs. Based on the current model (10), we predicted that SWDs should decrease or disappear in CaV3.3 −/− mice. To test this prediction, we examined the susceptibility of adult CaV3.3 −/− mice to the seizure-inducing drug GBL (Fig. 2C). Administration of GBL (70 mg/kg body weight, i.p.) to wild-type mice at the age of ∼10 wk induced typical paroxysmal SWDs (Fig. 2C) and behavioral arrest (Fig. S1A). Contrary to our expectations, SWDs were not reduced in CaV3.3 −/− mice; in fact, GBL induced SWDs more effectively, both in density (s/min) and in total duration, in CaV3.3 −/− mice than in wild-type mice [repeated-measures ANOVA (rmANOVA), group effect, F(1) = 15.67, P = 0.0008; time effect, F(57) = 14.46, P < 0.0001; Group × Time interaction, F(57) = 4.35, P < 0.0001] (Fig. 2D), with longer SWD episode durations (Kolmogorov–Smirnov test, P = 0.00003) (Fig. 2E). The relative EEG power in the 3–5 Hz range characteristic of SWDs was significantly higher in CaV3.3 −/− mice than in wild type for the entire recording duration after GBL treatment (Fig. 2F). Similar results were found in juvenile (age 3–4 wk) mutant mice [rmANOVA, group effect, F(1) = 1.94, P = 0.1807; time effect, F(57) = 15.2, P < 0.0001; Group × Time interaction, F(57) = 1.9, P < 0.0001] (Fig. S1 C and D). SWDs were analyzed based on the raw and filtered waveform of EEG recordings as described in detail in SI Materials and Methods (Fig. S1 A and B). The antiabsence drug ethosuximide (ETX) dramatically suppressed GBL-induced SWDs in mutant animals, confirming that these were typical SWDs (Fig. S1B1).

Within the thalamocortical pathway, CaV3.3 is expressed both in cortex and in TRN (17). To address possible cortical contributions to the seizure phenotype of CaV3.3 −/− mice, we used virus-mediated gene silencing to knock down CaV3.3 predominantly in TRN. An adeno-associated viral (AAV) vector containing shRNA specific for CaV3.3 (Fig. S2 A and B) was injected bilaterally into the TRN of 8-wk-old GAD65-GFP mice (Fig. 3A, Upper). Three weeks later, both scrambled (control) shRNA and shRNA for CaV3.3 (shCaV3.3), visualized with red fluorescence (mCherry), colocalized with GAD65-positive (GFP-expressing) neurons in TRN. Expression of virus was restricted to TRN neurons (Fig. 3A, Middle and Lower) and significantly reduced CaV3.3 mRNA expression, by ∼62%, in TRN compared with control (Fig. 3B). Such TRN-specific gene silencing of CaV3.3 reduced T-type Ca 2+ currents in TRN neurons (Fig. S2 C and D) and enhanced sensitivity to GBL, similar to what we observed in CaV3.3 −/− mice. CaV3.3 knockdown caused a significantly shorter onset time and a higher density of SWD compared with mice injected with control virus [rmANOVA, group effect, F(1) = 4.52, P = 0.0468; time effect, F(57) = 9.21, P = 0; Group × Time interaction, F(57) = 1.12, P = 0.2514] (Fig. 3 C and D). The EEG power spectrum in the 3–5 Hz range was also significantly higher in CaV3.3 knockdown mice than in control mice (Fig. 3E). Therefore, these results suggest that the increased GBL-induced absence seizures in CaV3.3 KO mice were mediated by deletion of CaV3.3 in TRN, not cortex.

Enhanced susceptibility to GBL-induced SWD after microinjection of AAV-shCaV3.3 in the TRN of GAD65-GFP mice. (A) Confocal images showing expression of AAV-control and AAV-shCaV3.3 (yellow circles indicate the TRN region in Top panels). Virus-infected neurons are colabeled with GFP (green, GAD65-GFP–positive neurons; red, AAV-control–infected neurons in Middle panel and AAV-shCaV3.3–infected neurons in Bottom panel; yellow, merged). (B) Representative images of in situ hybridization (Upper, AAV-control; Lower, AAV-shCaV3.3). (C) EEG traces of AAV-control (Left) and AAV-shCaV3.3 mice (Right) for 1 min before and various times after GBL injection. (D) SWD density calculated by total duration of SWD per min in AAV-control– (black circles) and AAV-shCaV3.3– (red circles) injected mice. (E) Average EEG power spectrograms of AAV-control– and AAV-shCaV3.3–injected mice during a 50-min recording. GBL was injected after 10 min of baseline recording. Data are represented as mean ± SEM; *P < 0.05.

Complete Deletion of Burst Firing in TRN Enhances Absence Seizures in CaV3.3 −/− KO and CaV3.2 −/− /3.3 −/− Double-KO Mice.

To sidestep the possible effects of residual T-type channel activity present in the TRN of CaV3.3 knockdown and CaV3.3 −/− mice (Fig. 1F), we made double-KO (DKO) mice lacking both CaV3.2 and CaV3.3 (CaV3.2 −/− /3.3 −/− mice) (Fig. S3A). Although these mice had no bursting activity in their TRN neurons (Fisher’s exact probability, P = 0.0012) (Fig. 4A), the effect of GBL injection was similar to what we observed in CaV3.3 −/− and CaV3.3 knockdown mice [rmANOVA, group effect, F(1) = 21.35, P = 0.0004; time effect, F(57) = 13.99, P < 0.0001; Group × Time interaction, F(57) = 5.43, P < 0.0001] (Fig. 4 B–F). The interspike interval within an SWD was not changed. Further, neither CaV3.3 −/− KO nor CaV3.2 −/− /3.3 −/− DKO mice exhibited spontaneous SWD (data not shown). These findings indicate that SWD can be maintained in the complete absence of TRN burst firing and, in fact, is enhanced when bursts are abolished.

Tonic Firing Is Increased in CaV3.3 −/− KO and CaV3.2 −/− /3.3 −/− DKO Mice.

To explain this unexpected observation, we considered the possibility that loss of T-type Ca 2+ currents could have other effects on TRN neurons that enhance SWDs. In contrast to the loss of TRN bursts, the frequency of tonic firing evoked by depolarizing currents was significantly increased in TRN GFP-positive projection neurons of CaV3.3 −/− mice compared with those in wild-type mice [two-way ANOVA, group effect, F(1,36) = 4.8, P = 0.035] (Fig. 5 A and B). Tonic firing was also increased in CaV3.2 −/− /3.3 −/− mice (Fig. S3 B and C). To examine the cause of the increased tonic firing, we analyzed the after-hyperpolarizations (AHPs) that are known to regulate action potential frequency. The amplitude of the fast AHP (fAHP), which plays an important role in repolarization of the membrane potential after an action potential, was not significantly different between CaV3.3 −/− and wild-type neurons (Fig. 5C). Coupling between T channels and Ca 2+ -activated K + channels is known to be indispensable for oscillatory discharges of TRN neurons (23). Thus, we examined the amplitude of the medium AHP (mAHP) caused by activation of Ca 2+ -activated K + channels and found that mAHPs were significantly reduced in the mutant mice (Fig. 5D). This was also observed in CaV3.2 −/− /3.3 −/− mice (Fig. S3 D and E). Therefore, the increased tonic firing of mutant TRN neurons might be at least partially due to the decreased mAHP. In contrast, other intrinsic electrophysiological properties of TC neurons were unaffected in mutant mice (Fig. S4 A–D). These neurons also exhibited no differences in the amplitude or frequency of spontaneous inhibitory postsynaptic currents (IPSCs) resulting from TRN input (Fig. S4 E–G), which is consistent with the lack of spontaneous SWDs in CaV3.3 −/− mice.

Enhanced TRN–TC Inhibitory Synaptic Connection in the Mutant.

To determine the downstream effect of increased tonic firing in TRN neurons, we next examined the TRN–TC inhibitory synaptic connection. Whole-cell patch clamp recordings from TC neurons of CaV3.3 −/− mice, measured in the presence of the glutamate receptor blockers 6-cyano-7-nitroquinoxaline-2,3-dione and 2-amino-5-phosphonopentanoic acid, revealed monosynaptic IPSCs in response to electrical stimulation of TRN. There was no significant difference in the amplitude of IPSCs evoked in TC neurons by single stimuli (wild type, 274 ± 117 pA, n = 8; CaV3.3 −/−, 321 ± 131 pA, n = 9; P = 0.8). This indicates that basal synaptic strength is unchanged in the mutant. However, differences were observed in response to multiple TRN stimuli. We used five stimuli at 100 Hz or 500 Hz to mimic tonic or burst firing, respectively (Fig. 5E). Previous in vivo observations show that TRN neurons fire bursts composed of 5–15 spikes at frequencies ranging from 250 to 550 Hz (24). Kim et al. (25) demonstrated that a 100 Hz tonic discharge or a 500 Hz burst frequency of the perigeniculate nucleus generates IPSPs in postsynaptic TC cells in vitro. For tonic-frequency stimulation, synaptic strength, measured as the integrated charge of IPSCs, was significantly larger in CaV3.3 −/− than in wild-type mice (Fig. 5F). However, there was no difference between the two genotypes in their responses to burst-frequency stimulation. As a result, in the mutant, IPSC responses to tonic-frequency stimulation were significantly larger than responses to burst-frequency stimulation. To reveal how tonic-frequency stimulation increased inhibitory synaptic transmission in CaV3.3 −/− mice, we analyzed the paired-pulse ratio (PPR; ratio of second/first response) of IPSCs. The PPR evoked by tonic-frequency stimulation was greater (P = 0.038) in CaV3.3 −/− (0.74 ± 0.6) than in wild type (0.55 ± 0.75) (Fig. 5G). This indicates that the increase in inhibitory synaptic transmission in CaV3.3 −/− mice is activity-dependent and presynaptic in origin, perhaps caused by the decreased mAHPs of TRN neurons. The change in PPR may reflect changes in presynaptic release probability.


Networks and Learning

So far we have discussed only rudimentary properties of individual neurons. However, the way neurons are put together into networks and trained has historically had a larger impact on their performance than the structure of a single unit.

The first practical ANN, introduced by Frank Rosenblatt in the 1950s, was called the Perceptron. It did not include any hidden layers and as such was profoundly limited. However, its parallel nature allowed it to easily learn simple logic: performing logic on parallel inputs is much easier than on serial inputs, as no memory is required to store intermediate values. Deep layers were not around yet, mostly because no one knew how to train them. Minsky and Papert demonstrated in 1969 that this basic kind of ANN architecture can never solve linearly inseparable problems (e.g. perform logical operations like exclusive OR) and concluded that it would never be able to solve any actually interesting problem. In their book they wrongly conjectured that this limitation is common to all ANNs, which caused a huge decrease in interest and funding, essentially halting research in the area for almost two decades.
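Minsky and Papert's point is easy to reproduce. The sketch below (illustrative only, not taken from any of the works discussed here) trains a Rosenblatt-style perceptron with the classic error-correction rule: it reaches perfect accuracy on the linearly separable AND function but cannot do the same for XOR.

```python
import itertools

def train_perceptron(samples, epochs=100, lr=0.1):
    """Rosenblatt perceptron: two weights, a bias, and a step activation."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # error-correction learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    correct = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return correct / len(samples)

inputs = list(itertools.product([0, 1], repeat=2))
AND = [((x1, x2), x1 & x2) for x1, x2 in inputs]
XOR = [((x1, x2), x1 ^ x2) for x1, x2 in inputs]

w_and, b_and = train_perceptron(AND)
w_xor, b_xor = train_perceptron(XOR)
print(accuracy(AND, w_and, b_and))  # 1.0: AND is linearly separable
print(accuracy(XOR, w_xor, b_xor))  # below 1.0: no single line separates XOR
```

No amount of extra training helps in the XOR case; the decision boundary of a single unit is a line, and no line puts (0,1) and (1,0) on one side with (0,0) and (1,1) on the other.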

It took until the 1980s to salvage what was left of ANNs. An international group of researchers (David Rumelhart, Geoffrey Hinton, and others) came up with a method to train deep ANNs called backpropagation. This immediately disproved the conjecture of Minsky and Papert and kicked off a large research effort into ANNs. It is worth noting that backpropagation is perhaps the aspect of ANNs most widely criticised for being biologically implausible.

Backpropagation relies on propagating errors backwards through a deep ANN to correct/train the deep layers. The algorithm has its roots in automatic differentiation, a purely mathematical technique for mechanically applying the chain rule of calculus, developed with essentially no reference to the biology underlying intelligence. Claiming biological inspiration as its inception would therefore be silly.
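The mechanics are just the chain rule applied in reverse. This hand-rolled sketch (the toy two-weight network and all names are invented for illustration) propagates the loss gradient backwards through one hidden nonlinearity and checks the result against a numerical gradient:

```python
import math

# Toy two-layer "network" on a scalar: y = w2 * tanh(w1 * x), loss = (y - t)^2
def forward(x, w1, w2):
    h = math.tanh(w1 * x)
    y = w2 * h
    return h, y

def backprop(x, target, w1, w2):
    """Reverse-mode chain rule: push dL/dy back to each weight."""
    h, y = forward(x, w1, w2)
    dL_dy = 2 * (y - target)            # gradient at the loss
    dL_dw2 = dL_dy * h                  # gradient for the output weight
    dL_dh = dL_dy * w2                  # error signal fed *backwards*
    dL_dw1 = dL_dh * (1 - h**2) * x     # through tanh to the deep weight
    return dL_dw1, dL_dw2

x, target, w1, w2 = 0.5, 1.0, 0.3, 0.7
g1, g2 = backprop(x, target, w1, w2)

# Sanity check against a central finite difference on w1
eps = 1e-6
loss = lambda a: (forward(x, a, w2)[1] - target) ** 2
num_g1 = (loss(w1 + eps) - loss(w1 - eps)) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)  # True: analytic and numeric gradients agree
```

The "backwards" step is the line computing `dL_dh`: the output layer's error, scaled by its weight, becomes the error signal for the layer below. That is the operation whose biological counterpart is disputed.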

The main criticism of the biological plausibility of backpropagation focuses on the requirement to feed signals backwards, which biological neurons are generally considered unable to do (this is not entirely true, as some marginal cases of backwards signalling have been reported). However, neuroscientists have long known that neurons are not a uniform mass of homogeneous material; instead, they repeatedly appear in structured assemblies that have feedback connections [2]. It is therefore entirely plausible that, at least in some brain areas, neurons exist in pairs where one feeds forwards while the other feeds back, making backpropagation possible within a neural assembly. That being said, there has so far been no convincing demonstration of backpropagation of error in the brain, something we should expect to be relatively easy to observe if it were a prominent feature of biological neural networks.

Backpropagation was a necessary advancement for deep neural networks, as depth is one of the most powerful concepts brought to ANNs. Depth allows for hierarchical representation of a problem: solving smaller problems first (e.g. detection of edges) before moving on to bigger ones (e.g. recognizing an object from a collection of edges). Such hierarchical processing is also a prominent feature of information processing in brains [16], providing an elegant alignment between biological and artificial intelligence.

Another large improvement to the capabilities of ANNs was the introduction of convolution as one of the main components [13]. Convolution essentially means moving a filter across the data to identify features. This brought a large advantage to the training of ANNs, as it hugely decreases the number of free parameters to be tweaked. Interestingly, it is well evidenced that similar principles are applied in the brain [6], and this may be one of the crucial mechanisms allowing the brain to process visual stimuli so efficiently.
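A naive sketch (the toy image and kernel are invented for illustration) shows both points at once: the same nine weights are slid across the whole input to detect a feature, so the parameter count is independent of the image size:

```python
# Naive "valid" 2D convolution: one 3x3 filter slid across the image.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

# A vertical-edge detector on a toy image: left half dark, right half bright.
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
fmap = conv2d(image, edge_kernel)
print(fmap[0])  # strongest responses sit at the dark/bright boundary

# Parameter sharing: the same 9 weights cover the whole image, whereas a
# dense layer mapping the 6x6 input to the 4x4 feature map would need 36*16.
print(9, 36 * 16)
```

The gap only widens with image size: the dense layer's parameter count grows with the square of the input area, while the filter stays at nine weights.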

So far we have seen that the criticism of backpropagation for its biological implausibility is shaky (even though no evidence for backpropagation within biological neural networks has been found so far), and that convolution seems biologically well founded. However, the last aspect of contemporary ANNs does not follow this line of reasoning. Initialization of weights by an identity matrix [12] is a technique that facilitates the training of deeper networks. It works because initially the deep layers just pass on their inputs unaltered, which is a better starting point than learning from a random complex transformation. As you might expect, this doesn't lend itself to any meaningful comparison with the brain, illustrating that biology, while a useful inspiration for ANN architectures, is by far not the only means of advancement in AI.
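The idea is easy to demonstrate. In this illustrative sketch (not the exact scheme of [12], which concerns recurrent networks), a stack of square layers initialized to the identity passes its input through unchanged, giving training a neutral starting point:

```python
def identity_init(n):
    """Square weight matrix initialized to the identity."""
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def linear(x, W):
    """Plain matrix-vector product: one linear layer without bias."""
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

# A stack of identity-initialized layers starts out as a no-op:
x = [0.5, -1.2, 3.0]
h = x
for _ in range(10):               # ten "deep" layers
    h = linear(h, identity_init(3))
print(h)  # [0.5, -1.2, 3.0]: inputs pass through unaltered before training
```

With random initialization, ten stacked layers would instead apply an arbitrary composed transformation whose effect training must first undo.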

In conclusion, the main advances driving the current success of ANNs concern how ANNs are trained, not what exactly they are. These advances are a mix of mostly pragmatic engineering choices and some loose biological inspiration that also happens to be good engineering.


Should people still get the new mRNA vaccine?

The appearance of this new B.1.1.7 variant makes it even more important that people get vaccinated as soon as possible.

If this new version is more transmissible, or if the vaccine is less effective because of a virus-vaccine mismatch, more people will need to be vaccinated to achieve herd immunity and get this disease under control.

Moreover, we now have proof that the spike protein of SARS-CoV-2 can change drastically in a short time, and so it is critical that we get the virus under control to prevent it from evolving further and completely undermining vaccination efforts.


4. Neuromorphic Hardware

There is a big discrepancy between the promise of efficient computing with SNNs and the actual implementation on currently available computing hardware. Simulating SNNs on von Neumann machines is typically inefficient, since asynchronous network activity leads to quasi-random access of synaptic weights in time and space. Ideally, each spiking neuron is its own processor in a network without a central clock, which is the design principle of neuromorphic platforms. The highly parallel structure, sparse communication, and in-memory computation proposed by SNNs stand in contrast to the sequential and central processing of data constrained by the memory wall between processor and memory on CPUs and GPUs. The computational efficiency of SNNs can be observed in brains, which solve complex tasks while requiring less power than a dim light bulb (Laughlin and Sejnowski, 2003). To close the gap in energy efficiency between simulations of SNNs and biological SNNs, several neuromorphic hardware systems optimized for the execution of SNNs were developed in the last decade (see Table 1; for reviews of technical specifications see Furber, 2016; Singh Thakur et al., 2018). The energy efficiency of neuromorphic systems makes them ideal candidates for embedded devices subject to power constraints, e.g., mobile phones, mobile and aerial robots, and internet of things (IoT) devices. Furthermore, neuromorphic devices could be utilized in data centers to reduce the cost of cloud applications relying on neural networks.
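The clock-driven inefficiency is visible even in a toy simulation. The sketch below (an illustrative leaky integrate-and-fire neuron with invented parameters, not a model of any particular chip) must update the membrane potential at every tick regardless of whether a spike occurs, which is exactly the per-step work an event-driven neuromorphic design avoids:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, stepped on a global clock.
# On a von Neumann machine every neuron is updated at every tick even when
# nothing happens; event-driven neuromorphic hardware skips that work.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)   # leaky integration toward the input
        if v >= v_thresh:             # threshold crossing emits a spike
            spikes.append(step)
            v = v_reset
    return spikes

# 100 clocked updates produce only a handful of spikes (events):
spikes = simulate_lif([1.5] * 100)
print(len(spikes), spikes)
```

Here 100 membrane updates yield only four output events; with sparse activity across millions of neurons, the ratio of bookkeeping to actual communication is what makes clocked simulation costly.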

Table 1. This table lists neuromorphic systems that have been built and for which results with deep SNNs on classification tasks have been shown (for extended lists of hardware systems that may potentially be used for deep SNNs see, e.g., Indiveri and Liu, 2015; Liu et al., 2016).

Inspired by biology, neuromorphic devices exploit locality of data to reduce on-chip traffic, mostly reflected in the use of spikes for communication and in limiting the fan-in of neurons. The massive parallelism of neuromorphic devices manifests itself in the physical representation of neurons and synapses on hardware, inspired by the seminal study of Mead (1990) (for a review, see Indiveri et al., 2011). Analog neuromorphic systems, which implement the functionality of spiking neurons and synapses with analog electronic circuits, usually have a one-to-one mapping between neurons and synapses in the network description and on hardware. In contrast, digital systems implement the parallel structure in a less fine-grained way by grouping and virtualizing neurons on cores (hundreds for the TrueNorth and thousands for the SpiNNaker system; see also Table 1). However, compared to the extensive virtualization on CPUs and GPUs, i.e., the total number of neurons in a network divided by the number of cores, the virtualization on neuromorphic systems is rather low. This leads to less flexibility in terms of connectivity and size of networks, and thus hardware demonstrations of functional deep SNNs are few. All hardware systems listed in Table 1 share an asynchronous computation scheme that enables computation on demand and reduces power consumption in case of low network activity.

In principle, neuromorphic hardware could be used for both training and inference of SNNs. While original and constrained DNNs (section 3.3) can usually be trained on GPUs and then converted to SNNs (section 3.2), spike-based training (section 3.4) and especially local learning rules (section 3.5) are computationally more expensive on von Neumann machines and could hence benefit greatly from hardware acceleration. However, so far, spike-based training and local learning rules have not been shown for competitive deep networks. Rapid developments in this area of research make it difficult to build dedicated hardware for training, since chip design and production are time-consuming and costly (see also section 6).

4.1. Inference on Neuromorphic Hardware

Once the parameters of SNNs are obtained by any of the training methods reviewed in section 3, these networks usually have to be adapted to the specific hardware system to be used for inference. Analog neuromorphic systems suffering from parameter variation may require cumbersome fine-tuning of pre-trained networks with the hardware system in the loop (Schmitt et al., 2017). This is not always practical, because the re-configuration of neuromorphic systems is often slow compared to, for example, CPUs and GPUs. Another common approach to improve test performance is to incorporate hardware constraints, for example limited counts of incoming connections and quantized weights, into the training process (section 3.3). Once parameters are trained and the device is configured, inference is usually fast and energy-efficient due to these systems' optimization for spike input and output. To our knowledge, results in which deep SNNs on silicon chips were used for classification tasks of at least MNIST complexity have been shown only for the TrueNorth, SpiNNaker, and BrainScaleS hardware systems (for hardware specifications and classification performances, see Table 1). For other promising neuromorphic systems, no results for deep SNNs have been shown yet (Park et al., 2014; Lin et al., 2018), or the presented neuron and synapse counts are too small to show competitive results (Pfeil et al., 2013a; Schmuker et al., 2014; Indiveri et al., 2015; Qiao et al., 2015; Moradi et al., 2017; Petrovici et al., 2017). Prototypical software implementations and field-programmable gate array (FPGA) systems are not considered in this study. As an exception, we would like to mention the novel Intel Loihi chip (Davies et al., 2018), for which results of a single-layer network on preprocessed MNIST images on a prototypical FPGA implementation have been shown (Lin et al., 2018).
Once commissioned, Loihi's large number of neurons, their connectivity and configurability, and its on-chip learning capabilities could be a good basis for enabling deep networks on raw image data. Table 1 shows deep SNNs on the SpiNNaker and BrainScaleS systems that approximate multi-layer perceptrons (MLPs) and rate-based deep belief networks (DBNs), respectively, showing network activity like the example plotted in Figure 1D. In contrast, deep CNNs are binarized for their execution on the TrueNorth system (compare to Figure 1C). This means that neuron activations on TrueNorth are represented by single spikes and each neuron in a network is stateless and fires at most once for each input. In other words, spikes no longer contain temporal information, but the high throughput makes inference energy-efficient.

Are the presented neuromorphic systems more power-efficient than GPUs? The answer to this question very much depends on the chosen benchmark task, and we can give only approximate numbers for frame-based classification tasks (for further discussion see section 6). Since power measurements on modern mobile GPUs (Nvidia Tegra X1) are only reported for large networks (AlexNet) on comparably large images from the ImageNet dataset (NVIDIA Corporation, 2015), and power numbers of the most efficient neuromorphic system are recorded for custom networks on smaller images from the CIFAR10 dataset (Esser et al., 2016), a straightforward comparison is not possible. However, if we assume a linear decrease in the number of operations with the area of the input image, which is approximately true for convolutional networks, the reported energy of 76 mJ for GPUs to process an image of size 224 × 224 scales down to approximately 2 mJ for an image from the CIFAR10 dataset with size 32 × 32. This energy consumption is approximately one order of magnitude higher than for the most power-efficient neuromorphic solution, i.e., binarized DNNs on the TrueNorth system (for numbers see Table 1). Since the energy consumption of most neuromorphic systems is dominated by that of synaptic events, i.e., communication and processing of spikes, higher benefits are expected for models that exploit sparse temporal codes, rather than rate-based models.
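The scaling argument amounts to one line of arithmetic. Spelled out with the figures quoted in the text, the assumed linear-in-area model gives:

```python
# Scale the reported 76 mJ per 224x224 ImageNet frame (Nvidia Tegra X1,
# AlexNet) down to a 32x32 CIFAR10 frame, assuming operations, and hence
# energy, scale linearly with input area (approximately true for CNNs).
gpu_energy_mj = 76.0
scale = (32 * 32) / (224 * 224)       # ratio of input areas, = 1/49
cifar_energy_mj = gpu_energy_mj * scale
print(round(cifar_energy_mj, 2))      # about 1.55 mJ, i.e. roughly 2 mJ
```

Note the assumption is rough: fixed per-frame overheads and fully connected layers do not shrink with the image, so the true GPU figure would likely be somewhat higher, which only strengthens the comparison in favor of the neuromorphic system.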

4.2. On-Chip Learning

Although unified methods to train SNNs are still missing, the SpiNNaker and BrainScaleS hardware systems implement spike-timing-dependent plasticity (STDP), a local unsupervised learning rule inspired by biology. Synaptic weights are updated by means of local correlations between pre- and postsynaptic activity (see also section 3.5). Neuromorphic systems are valuable tools to investigate such local learning rules, because the training of networks with STDP often requires long simulations of SNNs in terms of biological time, and neuromorphic systems usually accelerate such simulations compared to conventional computers. The BrainScaleS system (Schemmel et al., 2010) and its successor (Aamir et al., 2018) are especially promising candidates for on-chip learning due to their acceleration of up to a factor of 10,000 compared to biological real time, but so far STDP has only been shown for small networks on a prototype chip (Pfeil et al., 2013b) and shows significant parameter variation due to imperfections in the production process (Pfeil et al., 2012). In addition, Friedmann et al. (2017) investigated the integration of on-chip plasticity processors into the BrainScaleS system to modulate STDP based on the model of neuromodulators in biology (Pawlak et al., 2010), allowing for supervised training. Although the implementation of STDP is costly in terms of chip area for the presented neuromorphic systems, novel electronic components, so-called memristors, may allow for much higher densities of plastic synapses (Jo et al., 2010; Saïghi et al., 2015; Boyn et al., 2017; Burr et al., 2017; Pedretti et al., 2017).
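A pair-based STDP rule of the kind implemented on these systems can be sketched as follows (a minimal software model; the amplitudes and time constants are illustrative values, not hardware parameters):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair under pair-based STDP.
    dt_ms = t_post - t_pre: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, with exponentially decaying windows."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)  # -> True True
```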


Club mosses

The club mosses include around 400 species of lycophytes from the class Lycopsida. The vast majority of species are found within a single genus known as Huperzia, whose members are sometimes referred to as the fir mosses.

They are found all around the world, most commonly growing in rainforests on the trunks of trees, but some species inhabit Arctic regions and the southern end of South America. The most significant difference between club mosses and other lycophytes is that club mosses have only one type of spore.


MATERIALS AND METHODS

Ethics statement.

All animal handling and procedures were in accordance with the National Institutes of Health animal welfare guidelines and were approved by the George Mason University institutional animal care and use committee.

D1 and D2 MSN models.

D1 and D2 MSN models were generated by modifying a previously published MSN model (Evans et al. 2012). The morphology of both MSN models was the same as this previous model, except with the spines removed (to improve computational efficiency), and consisted of 189 compartments with 4 primary dendrites which divide into 8 secondary and then 16 tertiary dendrites. Each primary dendrite was 20 μm long, secondary dendrites were 24 μm long, and tertiary dendrites were composed of 11 compartments, each 36 μm long. The kinetics of the channels included in the model were identical to the previous model (Evans et al. 2012). D1 and D2 MSN models were created by changing the maximal conductance of intrinsic and synaptic channels from values used for our previous MSN model (Table 1), based on experimental data measuring the effect of D1 or D2 receptor agonists, as summarized in Nicola et al. (2000) and Moyer et al. (2007). The maximal conductances of the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-d-aspartate (NMDA) receptors of both MSN classes were additionally adjusted to maintain the NMDA-to-AMPA ratio of 2.75:1 measured in cortico-striatal terminals from dorsal striatum of adult animals (Ding et al. 2008), as used previously (Evans et al. 2012). Note that this value is considerably greater than the NMDA-to-AMPA ratio measured in other striatal regions (ventral striatum, ∼0.22; Popescu et al. 2007) or from younger animals (∼1.0; Logan et al. 2007).
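The joint adjustment of AMPA and NMDA conductances to a fixed ratio can be sketched as below (the starting values and the choice to preserve the summed conductance are illustrative assumptions, not the model's actual procedure):

```python
def enforce_nmda_ampa_ratio(g_ampa, g_nmda, target_ratio=2.75):
    """Rescale AMPA and NMDA maximal conductances so that
    g_nmda / g_ampa == target_ratio while keeping their sum fixed."""
    total = g_ampa + g_nmda
    new_ampa = total / (1.0 + target_ratio)
    return new_ampa, total - new_ampa

g_a, g_n = enforce_nmda_ampa_ratio(0.5, 1.0)  # hypothetical values, nS
print(round(g_n / g_a, 2))  # -> 2.75
```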

Table 1. Modulation of channel conductances

Values are percentages of the conductance values reported in Evans et al. (2012).

For simulations that investigate the contribution of morphology differences between D1 and D2 MSNs in explaining differences in excitability, the number of primary dendrites for D1 MSNs was increased from 4 to 6 based on reconstructions of these neurons (Gertler et al. 2008). The intrinsic channel properties of both MSN classes were set to match those of a D2 MSN for these simulations since this produced frequency-current (F–I) curves that matched experimental results most accurately.

FSI network.

A previously published FSI network model was used (Hjorth et al. 2009) and extended to include chemical synapses, as seen experimentally (Gittis et al. 2010). Each FSI in this network consisted of 127 compartments with a soma and 2 primary dendrites, which divide into 4 secondary and 8 tertiary dendrites. The channels incorporated in this model included a fast sodium channel (NaF), voltage-dependent K+ (Kv) 3.1, Kv1.3, and a slowly inactivating A-type (transient) potassium channel (KAs) (Kotaleski et al. 2006). The gap junction connections between the FSIs were modeled as resistive elements between the primary dendrites, with a conductance of 0.5 nS, coupling coefficient of 25%, and probability of gap junction connection between nearby FSIs of 0.3 (Galarreta and Hestrin 2002; Hjorth et al. 2009; Koós and Tepper 1999). Chemical synapses were GABAergic, chloride-permeable channels [rise time constant, 1.33 ms; decay time constant, 4 ms; reversal potential, −60 mV; maximal conductance, 1 nS (Gittis et al. 2010)]. The probability of chemical synapse connection between FSIs was 0.58 (Gittis et al. 2010) and was independent of the probability of gap junction connection.
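The resistive gap-junction coupling can be expressed in a few lines (a schematic of the ohmic current only; the full compartmental update in the model is more involved):

```python
G_GAP_NS = 0.5  # gap-junction conductance from the text, in nS

def gap_junction_currents(v1_mv, v2_mv, g_ns=G_GAP_NS):
    """Current through a resistive gap junction between two coupled
    dendritic compartments; returns (I into cell 1, I into cell 2)
    in pA, since nS * mV = pA.  Current flows from the more
    depolarized compartment into the more hyperpolarized one."""
    i_into_1 = g_ns * (v2_mv - v1_mv)
    return i_into_1, -i_into_1

print(gap_junction_currents(-80.0, -60.0))  # -> (10.0, -10.0)
```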

Striatal network.

The striatal network consisted of 1,000 MSNs (500 D1 MSNs and 500 D2 MSNs) and 49 FSIs (see Fig. 2A). The MSN-to-FSI ratio was based on experimental observations: each MSN receives input from 55% of nearby striatal FSIs (Tecuapetla et al. 2007), and between 4 and 27 FSIs converge on the same MSN (Koós and Tepper 1999). Based on these estimates, the 49-neuron FSI network corresponded to the FSI network seen by 1,000 postsynaptic MSNs. Although the percentage of FSIs is slightly larger than observed experimentally, a smaller number of FSIs would have incorrectly produced homogeneous FSI input to each MSN in the network model. A heterogeneous network of neurons was generated by changing the KAs conductance (both MSNs and FSIs) and NMDA channel conductance (MSN only) by ±10%. The range of activity of MSNs used in the network, in response to current injections, was within the range of experimentally observed responses (see Fig. 2B; electrophysiology methods described below). The distance between each MSN soma in the model was 25 μm in both the x-axis and the y-axis based on experimental observations (Tunstall et al. 2002), resulting in a 775 × 775 μm² grid. At each grid location, the assignment of either D1 or D2 MSNs was random with probability = 0.5.
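The grid placement can be sketched as follows (the 32 × 32 grid size is an assumption consistent with the 25-μm spacing and 775 × 775 μm² extent; the model placed 1,000 MSNs on such positions):

```python
import random

random.seed(42)
SPACING_UM = 25
N_SIDE = 32  # assumption: 32 x 32 grid -> 31 * 25 = 775 um per side

# random D1/D2 assignment with probability 0.5 at each grid location
grid = {
    (i * SPACING_UM, j * SPACING_UM): random.choice(["D1", "D2"])
    for i in range(N_SIDE)
    for j in range(N_SIDE)
}
print(len(grid), sum(1 for v in grid.values() if v == "D1"))
```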

The probabilities of connection of MSN-MSN and FSI-MSN synapses were modeled using a distance-based exponential function, tuned to match experimentally observed probabilities of connections (Gittis et al. 2010; Planert et al. 2010; Plenz 2003; Taverna et al. 2008): P(x) = e^{−[(x₂ − x₁)² + (y₂ − y₁)²]/f}, where f = 95 μm². FSIs were connected to MSNs with GABAergic synapses [rise time constant, 0.25 ms; decay time constant, 3.65 ms; reversal potential, −60 mV; maximal conductance, 8.4 nS (Gittis et al. 2010)], whereas the GABAergic synapses between MSNs had a maximal conductance of 0.75 nS with the same rise and decay times (Koós et al. 2004). Note that GABAergic synapses in MSNs are depolarizing due to the hyperpolarized (−80 to −90 mV) resting membrane potential of mature MSNs (Wilson and Kawaguchi 1996), coupled with GABA responses which reverse between −60 and −50 mV (Kita 1996; Mercuri et al. 1991; Tunstall et al. 2002). The FSI-MSN synapses also were more proximal than MSN-MSN synapses (Oorschot et al. 2010). The probability and strength of connection of MSN-MSN and FSI-MSN synapses in the network model were independent of the type of pre- or postsynaptic MSN (Planert et al. 2010). The transmission delays were distance-based using a conduction velocity of 0.8 m/s for both FSI and MSN synapses (Tepper and Lee 2007; Wilson 1986; Wilson et al. 1990).
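The distance-based connection probability can be computed as below (reading the exponent as the squared Euclidean distance between the two somata divided by f = 95 μm², as given in the text):

```python
import math

F_UM2 = 95.0  # f = 95 um^2, as given in the text

def connection_prob(p1, p2, f=F_UM2):
    """Probability of a synaptic connection between somata at
    p1 = (x1, y1) and p2 = (x2, y2), positions in micrometers:
    P = exp(-[(x2 - x1)^2 + (y2 - y1)^2] / f)."""
    d2 = (p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2
    return math.exp(-d2 / f)

# probability falls off with distance; identical positions give P = 1
print(connection_prob((0.0, 0.0), (0.0, 0.0)))  # -> 1.0
```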

Extrinsic synaptic input.

Excitatory input to the striatum comes primarily from the cortex and thalamus. We simulated this glutamatergic input as Poisson-distributed spike trains with a minimum time between spikes of 100 μs. Both MSN classes in this model have 360 AMPA and NMDA synapses and 227 GABA synapses. Note that each Poisson train represents activity from more than one cortical neuron, and each synapse represents the population of synapses in a single isopotential compartment. Thus the 100-μs minimum interspike interval prevents more than 1 spike per time step to each MSN. Each synaptic channel in the MSN model receives an input of 10 Hz during the upstate, which results in a total input of ∼800 synaptic inputs per second, and 1/20 of this input during down states (Blackwell et al. 2003). Each FSI model has 127 AMPA synapses and 93 GABA synapses, resulting in a total AMPA input of 282 Hz and GABA input of 207 Hz when 2 Hz input is provided to each synapse (Kotaleski et al. 2006). To introduce correlations within both the MSN and FSI input, each spike from the set of spike trains was assigned to more than one synapse with probability P = 1/n, where n = N − c(N − 1), N is the number of synapses, and c = 0.5 (Hjorth et al. 2009). For each neuron, starting from a single mother spike train per neuron, spike trains for each synapse were created by assigning the spike to that synapse if a uniform random number was less than P. This produced a mean number of synapses activated by each spike of 1.4 for the control simulations. To introduce between-neuron input correlation, an additional shared set of input spikes was generated. The between-neuron input correlation was then adjusted by changing the fraction of input each neuron received from this shared pool (as opposed to the spike trains that were unique to each neuron).
Unless otherwise stated, 30% of all excitatory synaptic input to either the FSI or MSN populations was shared, with FSIs and MSNs each having their own sets of shared inputs. This base level of between-neuron correlation was incorporated based on studies that report correlation among the cortical input to the striatum (Krüger and Aiple 1988; Stern et al. 1998; Ts'o and Gilbert 1988). For the case where synchronized cortical input was provided to FSIs to compensate for lack of gap junctions, the correlation value was doubled for FSI inputs only.
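The within-neuron correlation scheme can be sketched roughly as follows (a simplified reading in which each spike of the mother train is assigned to each synapse independently with probability P = 1/n; the original implementation details may differ):

```python
import random

random.seed(0)

def distribute_mother_train(mother_train, n_syn=360, c=0.5):
    """Copy spikes from a per-neuron mother spike train onto n_syn
    synaptic trains with probability P = 1/n, n = N - c*(N - 1),
    so that c controls within-neuron input correlation
    (c = 0 -> independent synapses, c = 1 -> fully shared input)."""
    n = n_syn - c * (n_syn - 1)
    p = 1.0 / n
    trains = [[] for _ in range(n_syn)]
    for t in mother_train:
        for train in trains:
            if random.random() < p:
                train.append(t)
    return trains

trains = distribute_mother_train([1.0, 2.0, 3.0])
print(len(trains))  # -> 360
```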

The model was implemented in GENESIS (Bower and Beeman 2007), and simulations used a time step of 100 μs. The simulation time was 2 s with five 0.2-s duration upstates separated by 0.2-s duration downstates. Each upstate used a different set of cortical input spikes and thus was an independent observation of the network response. Each network simulation experiment took 3 wk to run. The cortico-striatal Poisson spike trains were generated using MATLAB (version 2007b, MathWorks). The simulation and output processing software, along with the files used for the simulations, are freely available from the authors' website (http://krasnow1.gmu.edu/CENlab/) and modelDB (http://senselab.med.yale.edu/ModelDB/).

Analysis of spikes.

Mean firing frequency during upstates was plotted by averaging across neurons of the same class using 10-ms time bins. The firing frequency was expressed as mean ± SD of values obtained from five different upstates. To investigate the contribution of gap junctions to synchronization, cross-correlograms were constructed for each directly coupled neuron pair in the FSI network and then averaged over the network (Hjorth et al. 2009). Cross-correlograms also were constructed for directly coupled MSN pairs. Correlation was corrected for firing frequency by subtracting the shuffled cross-correlograms for the same network condition. Statistical analyses were performed using SAS. When only two groups were being compared, the procedure TTEST was used. When more than two groups were compared, one-way analysis of variance was performed using the GLM procedure with network condition as the independent variable and difference between D1 and D2 MSN frequencies as the dependent variable. Each of the five upstates was considered as an independent replication, and P < 0.05 was considered significant. Post hoc analyses used Bonferroni correction for multiple comparisons with P < 0.01 considered significant.
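A minimal version of the raw cross-correlogram computation (before the shuffle correction described above) might look like this; the bin width and lag window are illustrative choices:

```python
def crosscorrelogram(spikes_a, spikes_b, max_lag=50.0, bin_size=1.0):
    """Histogram of spike-time differences t_b - t_a (in ms) between two
    trains, binned from -max_lag to +max_lag.  Subtracting the same
    histogram computed on shuffled trains corrects for firing rate."""
    n_bins = int(2 * max_lag / bin_size)
    counts = [0] * n_bins
    for ta in spikes_a:
        for tb in spikes_b:
            d = tb - ta
            if -max_lag <= d < max_lag:
                counts[int((d + max_lag) / bin_size)] += 1
    return counts

# two coincident spikes land in the zero-lag bin (index 50 of 100)
print(crosscorrelogram([10.0], [10.0])[50])  # -> 1
```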

Electrophysiology for model tuning.

Patch-clamp recordings were performed to obtain a range of responses of MSNs to somatic current injection (Fig. 2B). C57B6 male and female mice (at least 20 days old) were anesthetized with isoflurane and decapitated. Brains were quickly extracted and placed in oxygenated ice-cold slicing solution (in mM: KCl 2.8, dextrose 10, NaHCO3 26.2, NaH2PO4 1.25, CaCl2 0.5, Mg2SO4 7, sucrose 210). Hemicoronal slices from both hemispheres were cut 350-μm thick using a vibratome (Leica VT 1000S). Slices were immediately placed in an incubation chamber containing artificial cerebrospinal fluid (in mM: NaCl 126, NaH2PO4 1.25, KCl 2.8, CaCl2 2, Mg2SO4 1, NaHCO3 26.2, dextrose 11) for 30 min at 33°C, then removed to room temperature (21–24°C) for at least 90 more minutes before use.

A single hemislice was transferred to a submersion recording chamber (ALA Science) gravity-perfused with oxygenated artificial cerebrospinal fluid containing 50 μM picrotoxin. Temperature was maintained at 30–32°C (ALA Science) and was monitored with an external thermistor. Whole cell patch-clamp recordings were obtained from neurons under visual guidance using infrared differential interference contrast imaging (Zeiss Axioskop2 FS plus). Pipettes were pulled from borosilicate glass on a laser pipette puller (Sutter P-2000) and fire-polished (Narishige MF-830) to a resistance of 3–7 MΩ. Pipettes were filled with a potassium-based internal solution (in mM: K-gluconate 132, KCl 10, NaCl 8, HEPES 10, Mg-ATP 3.56, Na-GTP 0.38, EGTA 0.1, biocytin 0.77) for all recordings. Intracellular signals were collected in current clamp and filtered at 3 kHz using an Axoclamp2B amplifier (Axon Instruments) and sampled at 10–20 kHz using an ITC-16 (Instrutech) and Pulse version 8.80 (HEKA Electronik). Series resistance (6–30 MΩ) was manually compensated.


Background

Neural responses in visual cortex depend not only on sensory input but also on behavioral context. One such context is locomotion, which modulates single-neuron activity in primary visual cortex (V1). How locomotion affects neuronal populations across cortical layers and in precortical structures is not well understood.

Results

We performed extracellular multielectrode recordings in the visual system of mice during locomotion and stationary periods. We found that locomotion influenced activity of V1 neurons with a characteristic laminar profile and shaped the population response by reducing pairwise correlations. Although the reduction of pairwise correlations was restricted to cortex, locomotion slightly but consistently increased firing rates and controlled tuning selectivity already in the dorsal lateral geniculate nucleus (dLGN) of the thalamus. At the level of the eye, increases in locomotion speed were associated with pupil dilation.

Conclusions

These findings document further, nonmultiplicative effects of locomotion, reaching earlier processing stages than cortex.


Inflorescence

The arrangement and distribution of flowers on the axis of a plant is called inflorescence. The supporting stalk of an inflorescence is known as the peduncle. The supporting stalk of an individual flower is called the pedicel.
Solitary terminal: if a single flower develops at the terminal end of the axis, the inflorescence is solitary terminal.
Solitary axillary: if a single flower develops from the axil of a leaf or branch, the inflorescence is solitary axillary.
Based on the mode of distribution and origin of flowers on the plant, inflorescences can be categorized as:

    Racemose:
    In this type of inflorescence, the main axis does not terminate in a flower but continues to grow and gives off flowers laterally. The flowers at the lower or outer side are older and the upper or inner flowers are younger. This is termed an arrangement of flowers in acropetal or centripetal succession. It is of the following types:

  1. Raceme
    In this inflorescence, the main axis is long and bears flowers laterally on stalks of equal length. E.g. mustard.
  2. Spike
    Like a raceme, it consists of a long main axis, but the flowers are sessile (stalkless). E.g. Amaranthus, etc.
  3. Spikelet
    A spikelet is a small or secondary spike; spikelets are the basic units of the grass inflorescence.
  4. Catkin or Amentum:
    In a catkin, the flowers are arranged like those of a spike but are more compactly arranged and unisexual. The axis is long and pendulous. All the flowers of a catkin mature at the same time and fall as a unit. E.g. mulberry, etc.
  5. Spadix
    It is also like a spike, but the axis is fleshy and the whole axis is enclosed by one or more large bracts called spathes. E.g. banana. The upper part of the axis bears the flowers.
  6. Corymb
    The arrangement of flowers in a corymb is like that of a raceme, but the main axis is short and the pedicels of the flowers are of varying length so that all the flowers lie at the same level. E.g. candytuft. In some cases, the main axis is long and the upper flowers form a corymb whereas the lower form a raceme. E.g. mustard.
  7. Umbel
    It consists of a very short axis. All the flowers have long stalks arising from the same point. E.g. Centella. In some cases, each flower is replaced by a small umbel, and the whole is called a compound umbel. It is found in Coriandrum.
  8. Head or capitulum
    In this inflorescence, the main axis is compressed and forms a convex structure called the receptacle. On the receptacle, sessile flowers (florets) are arranged in centripetal order. The whole inflorescence is surrounded by an involucre of bracts. Members of the family Compositae, like sunflower and marigold, bear it.

    Monochasial
    The main axis ends in a flower and only one lateral bud grows and again ends in a flower. It is of two types:

