Yau 1,2, McArthur 1,2, Badcock 1,2 and Brock 1,2,3. 1 ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia. 2 Department of Cognitive Science, Macquarie University, Sydney, Australia. 3 Department of Psychology, Macquarie University, Sydney, Australia.

An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing of GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Where typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset.

GM was retested aged 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong response to pure tones in her EEG responses. The consistency of the MEG and EEG data in this single case study demonstrates both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy.

Introduction

According to recent estimates, around 30% of individuals with autism spectrum disorders (ASD) remain nonverbal or minimally verbal despite intervention. A significant proportion of these individuals never speak, while others remain at the stage of echolalia or have a limited repertoire of fixed words and phrases that may be communicated through alternative/augmentative communication systems.

    Yet the vast majority of research on cognition and brain function in ASD focuses on high-functioning individuals with age-appropriate or only mildly-impaired language and cognitive abilities. This reflects the practical difficulties of testing these profoundly affected individuals, as well as concerns that results may be compromised by failure to understand task instructions or comply with task demands.

However, it is questionable whether insights gained from studies of linguistically able individuals with ASD may be extrapolated to those who are minimally verbal. To conduct research with minimally verbal children with ASD, it is important to develop valid measures that do not depend upon the ability to understand task instructions or comply with task demands. In principle, neurophysiological techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are well suited to this purpose. Electroencephalography reflects electrical activity from populations of synchronously firing neurons, while MEG measures the corresponding magnetic fields.

Both techniques are safe, noninvasive, and silent, and can provide insights into the neural mechanisms underpinning cognitive function with millisecond precision. Importantly, EEG and MEG responses can often be recorded passively while the participant is engaged in another activity, thereby avoiding concerns about confounding influences of poor task understanding and poor attention. MEG and EEG offer complementary strengths. MEG has superior spatial resolution because the brain's magnetic fields are not “smeared” or distorted by the brain, scalp, and skull, and are less prone to physiological noise compared to EEG. This allows for cleaner extraction of brain responses that are simpler to interpret.

MEG set-up is relatively easy and requires no physical contact with sensors, and so is well tolerated by verbal children with ASD. On the other hand, EEG is much cheaper and more widely available, making it a more realistic tool for large-scale multi-site studies and clinical applications. Despite their considerable potential, MEG and EEG studies of profoundly affected individuals with ASD are rare.

To date, such studies have focussed on auditory processing of simple tone stimuli. Using MEG, one study tested 8- to 32-year-old autistic individuals with “moderately to severely impaired” verbal communication (according to the Childhood Autism Ratings Scale). Relative to typically developing control participants, they showed a normal M100 response to the onset of tones, but a weak or absent mismatch response to rare sounds in the sequence. In contrast, another study found no evidence of group differences in the mismatch response or subsequent P3a response. Participants were described as having “low functioning autism” and “mental retardation,” but unfortunately no further details were provided regarding their language proficiency or whether they were nonverbal. The current paper adds to this extremely sparse literature on auditory processing in minimally verbal individuals with ASD. We present a case report of GM, a young autistic girl with cerebral palsy who, at the time of writing, has never spoken.

When GM was 8 years and 10 months old, we had the opportunity to measure her brain responses to vowel sounds and complex tones using MEG. Two years later, we were able to re-test GM, this time using a novel “gaming” EEG headset that has been adapted for research purposes.

Together, the two experiments indicate that GM has a highly unusual pattern of brain responses, characterized by atypically strong responses to nonspeech sounds, but weak responses to speech. This case report demonstrates, we believe, the feasibility and potential of both EEG and MEG for the study of minimally verbal individuals with ASD as well as those with cerebral palsy.

Background

GM is a young girl with ASD and cerebral palsy. At the time of testing for Experiment 1, she was 8 years and 10 months old. By the time of Experiment 2, she was 10 years and 10 months old. Although she does vocalize, she has never spoken in words, and currently uses an augmentative and alternative communication system on the iPad, with prompting from her mother, to communicate.

    She attends a school for children with special needs. Other than her cerebral palsy, GM has no history of brain injury or epilepsy. She has no history of ear infections, and was not on medications at the time of either testing session. Her family speaks Australian English at home. GM was diagnosed with cerebral palsy (spastic diplegia) aged 18 months. She has global developmental delay and did not walk until after her third birthday.

    Her mother reports that, as an infant, she had good eye contact and social communication development but lost this at around 18 months. Her diagnosis of DSM-IV Autistic Disorder was conferred by a developmental pediatrician at 59 months.

Under DSM-5, she would, therefore, automatically qualify for a diagnosis of ASD. GM's ASD diagnosis was further supported by her “Lifetime” score of 29 on the Social Communication Questionnaire (SCQ), which is well above the threshold of 15 for suspected ASD. Module 1 of the Autism Diagnostic Observation Schedule (ADOS) was administered but discontinued because GM showed distress early in the assessment and increased frustration when expected to play. During ADOS administration, she failed to engage in any of the activities, and did not partake in imitation, free play, or reciprocal interaction.

While she vocalized sporadically, she did not initiate, engage in, or respond to speech directed at her.

Cognitive Abilities and Adaptive Behavior

During the testing session for Experiment 1, we attempted to administer a number of standardized tests, including the Peabody Picture Vocabulary Test—4th Edition and the Matrices subtest of the Wechsler Intelligence Scale for Children. Testing using the standard procedures was unsuccessful, largely due to GM's severe communication challenges and her lack of engagement with the tasks. However, GM's mother was able to provide a report from a Clinical Psychologist and Senior Clinical Neuropsychologist of an assessment conducted at age 8 years and 2 months using modified procedures. Relevant sections from the report are reproduced below, with the caveat, noted by the clinicians, that the results of testing may have under-represented GM's true abilities.

“The administration of the assessment protocol was adapted due to the severity of GM's attention and expressive language difficulties. Task instructions were often repeated and the examiners pointed to relevant stimuli to help GM focus. Tasks were selected that allowed GM to point to her answer, and tasks that required a single-word or two-word response that GM could type on a computer or her iPad.”

“The nonverbal subtests on the WISC were administered to assess GM's level of intellectual functioning. The Block Design subtest could not be administered because of GM's motor difficulties. GM's visual processing and abstract reasoning ability were found to fall within the ‘extremely low’ range. The results indicated that GM's performance/nonverbal skills were consistent with a mild to moderate level of intellectual disability.”

“GM's understanding of vocabulary was measured with the PPVT. On formal testing, her performance was consistent with a 3–4 year age level.”

“GM's mother completed the Adaptive Behavior Assessment System—2nd Edition, which assesses a child's level of independence in everyday living, including the areas of communication, daily and community living skills, social and leisure, functional pre-academics, and motor skills. GM's skills overall were in the significantly delayed or ‘extremely low’ range. There was no significant variation evident in her overall level of functioning.”

Auditory Sensory Processing

Given the study's focus on auditory processing, GM's mother also completed the Short Sensory Profile, a parent questionnaire that addresses the sensory processing of the child in everyday situations. GM scored within the typical range for the Tactile, Taste/Smell, Movement and Visual/Auditory Sensitivity items.

She scored within the Probable Difference range for the Underresponsive/Seeks Sensation and Auditory Filtering items, and within the Definite Difference range in the Low Energy/Weak section, which relates to under-responsiveness to vestibular and proprioceptive sensation. Within the auditory items specifically, she was reported to have never responded negatively to unexpected or loud noises, to hold her hands over her ears, or to have trouble completing tasks when the radio is on. However, she was reported to be occasionally distracted or to have trouble functioning in noisy environments. Further, she was reported to not hear people, to not respond to her name being called, and to have difficulties with attention. Written informed consent was obtained from the mother of the patient for publication of this Case Report. A copy of the written consent is available for review by the Editor-in-Chief of this journal.

Experiment 1

In Experiment 1, we used MEG to investigate GM's brain responses to speech and nonspeech sounds.

Procedures for this experiment and Experiment 2 were approved by the Macquarie University Human Research Ethics Committee. Written consent was obtained from parents of all participants, who were given a modest amount of money, a small prize, and a certificate for their participation.

Participants

At the time of testing, GM was 8 years and 10 months old. Her brain responses were compared to those of 18 typically developing (TD) children (15 boys) and 13 verbal children with ASD (11 boys), aged between 6 and 14 years, who were tested as part of a separate study. All children spoke English as a first language and had normal hearing as determined using an Otovation Amplitude T3 series audiometer. All children with ASD had reports from psychologists or pediatricians confirming their DSM-IV and/or ICD-10 diagnosis of an ASD.

In addition, they all scored above the Autism cut-off on the SCQ. All children in the ASD group (“Verbal ASD”) had phrase speech, although performance on standardized language assessments varied widely, as shown in Table. Typically developing children scored below the Autism cut-off on the SCQ, and reported no history of brain injury, ASD, language impairment, or developmental disorders in their family.

Stimuli

Stimuli were 200-ms long with 5-ms ramps at the start and end to avoid clicks and distortions to the sounds. The speech stimulus was a natural-sounding English vowel /a/. The nonspeech stimulus was a complex tone created using Adobe Audition to match the first three formants of the speech sound (see Table for stimuli characteristics).

The main difference between the two sounds was the presence of a fundamental frequency (F0) in the speech stimuli, which gave the speech sounds their “speechiness.” Stimuli were presented binaurally at 75 dB SPL via earphones attached to rubber air tubes (Model ER-30, Etymotic Research Inc., Elk Grove Village, IL). Children were presented eight blocks of 100 speech stimuli interleaved with eight blocks of 100 nonspeech stimuli. The stimulus onset asynchrony (SOA) was jittered between 900 and 1100 ms.

The stimuli were presented in an oddball paradigm originally designed to elicit a mismatch field. Each block of 100 sounds included 85 frequently occurring “standard” sounds and 15 rarely occurring “deviant” sounds (a 10% increase in the frequency of F1, F2, and F3 relative to the standard sound). However, like other researchers, we found that the mismatch response was not reliably elicited at the individual level. Thus, following past research, our analyses focused on the obligatory brain responses to the onset of the standard stimuli.
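To make the stimulus design concrete, here is a minimal sketch that generates a 200-ms tone with 5-ms onset/offset ramps and assembles one 100-trial oddball block with jittered SOAs. The formant frequencies, the sampling rate, and the raised-cosine ramp shape are illustrative assumptions; the actual stimuli were created in Adobe Audition.

```python
import numpy as np

FS = 44100  # audio sampling rate in Hz (an assumption, not stated in the paper)

def ramped_tone(freqs, dur=0.200, ramp=0.005, fs=FS):
    """Sum of sinusoids with raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(dur * fs)) / fs
    tone = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    tone /= np.max(np.abs(tone))
    n_ramp = int(ramp * fs)
    window = np.ones_like(tone)
    window[:n_ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    window[-n_ramp:] = window[:n_ramp][::-1]
    return tone * window

# Illustrative formant frequencies (F1-F3) for the standard sound;
# the deviant raises each by 10%, as described in the text.
standard = ramped_tone([700, 1200, 2600])
deviant = ramped_tone([770, 1320, 2860])

# One block: 85 standards and 15 deviants in random order,
# with stimulus onset asynchronies jittered between 900 and 1100 ms.
rng = np.random.default_rng(0)
trials = rng.permutation(["std"] * 85 + ["dev"] * 15)
soas = rng.uniform(0.900, 1.100, size=len(trials))
```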

MEG Recording

MEG data were recorded using 160 coaxial first-order gradiometers with a 50 mm baseline (Model PQ1160R-N2, KIT, Kanazawa, Japan). MEG data were acquired with a sampling rate of 1000 Hz and a filter bandpass of 0.03–200 Hz. Prior to MEG recording, each child was fitted with an elasticized cap containing five marker coils. The positions of the coils and the shape of the participant's head were measured with a pen digitizer (Polhemus Fastrack, Colchester, VT). Head position was measured with the marker coils before and after each MEG recording, and children were visually monitored for head movements. If the authors detected movement from the child, data recording for that block was aborted and the marker coils re-measured.

Children who exceeded head movement of 5 mm were excluded from further analyses. During the recording, participants watched a silent subtitled DVD of their choice projected on a screen on the ceiling of the MEG room while lying on a comfortable bed inside the magnetically shielded room.

MEG Data Processing

MEG data were processed using BESA 6.0 software (MEGIS Software GmbH, Grafelfing, Germany). The data were filtered between 0.1 and 30 Hz, epoched from −100 ms pre-stimulus onset to 500 ms post-stimulus onset, and baseline corrected from −100 to 0 ms. Epochs with gradient artifact (including blinks and eye-movements) greater than 5336 fT/cm were identified using the artifact-rejection tool in BESA, and excluded from further analysis. All participants had at least 75% artifact-free epochs for each condition.
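The original analysis was performed in BESA, but the same preprocessing steps map onto open-source tooling. Below is a minimal sketch of the pipeline in MNE-Python; the file name and event extraction are placeholders, and the gradient rejection threshold is the paper's 5336 fT/cm converted to MNE's T/m units.

```python
import mne

# Placeholder file name; the study used a 160-channel KIT system.
raw = mne.io.read_raw_kit("gm_speech_block.sqd", preload=True)

# Band-pass 0.1-30 Hz, as in the BESA analysis.
raw.filter(l_freq=0.1, h_freq=30.0)

# Epoch -100..500 ms around stimulus onset, baseline-correct on the
# pre-stimulus interval, and reject epochs exceeding the gradient
# artifact threshold (5336 fT/cm = 5.336e-10 T/m in MNE's units).
events = mne.find_events(raw)
epochs = mne.Epochs(
    raw, events, tmin=-0.1, tmax=0.5, baseline=(-0.1, 0.0),
    reject=dict(grad=5.336e-10), preload=True,
)
evoked = epochs.average()  # obligatory response to stimulus onset
```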

On average, there were 542 accepted epochs for speech sounds and 538 for nonspeech sounds in the control group. For GM, there were 448 accepted epochs for speech sounds and 494 for nonspeech sounds. Data were first analyzed at the sensor level by computing the Global Field Power (GFP). This involved transforming the speech and nonspeech waveforms for each of the 160 sensors to absolute values and then averaging across the 160 channels to obtain a whole-head response.

This procedure avoids bias that may arise from picking a group of channels and complements analyses conducted in source space. Magnetic GFP also strongly corresponds with fitted dipoles in terms of strength and latency, and is considered a good representation of underlying brain activity from the sources. Data were also analyzed in source space using BESA 6.0.
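As described, the GFP computation here is a rectify-and-average across sensors (note that this differs from the more common definition of GFP as the spatial standard deviation across channels). A minimal NumPy version:

```python
import numpy as np

def global_field_power(evoked):
    """Whole-head response: mean absolute field across sensors.

    evoked: array of shape (n_channels, n_times), e.g. (160, 600)
    for a 160-sensor recording epoched from -100 to 500 ms at 1 kHz.
    Returns an array of shape (n_times,).
    """
    return np.abs(evoked).mean(axis=0)
```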

For each participant, we first averaged the sensor data across the speech and nonspeech conditions. Two dipoles were initially placed in bilateral Heschl's gyrus (according to the template brain) and then fitted freely (location and orientation), subject to the constraint that their locations remained symmetrical. For most participants, dipoles were fitted and optimized to the 80–110 ms window, corresponding to each child's M50/M100 response. However, in some cases, it was necessary to extend the time window down to 70 ms or up to 160 ms to more accurately account for latency delays in younger children or those with maturing waveforms. Separate speech and nonspeech source waveforms were then extracted from the left and right hemisphere dipoles.

Results and Discussion

Figure shows a timeline of GM's magnetic flux map for speech and nonspeech responses. Note that compared to the age-matched typically developing child in Figure, GM's response to nonspeech was much earlier and larger than her response to speech.

Figures show each participant's sensor waveforms to speech and nonspeech sounds. Again, there was a discrepancy between GM's double-peaked response to nonspeech stimuli and her virtually flat response to speech. In contrast, the other participants showed similar responses to speech and nonspeech stimuli. Note, however, that the participants differed widely in both the morphology of the waveforms and their overall magnitude. While this may partly reflect differences in brain activity, it may also depend on the child's position in the MEG helmet and the size of their head. To quantify the similarity between each participant's speech response and their own nonspeech response, we used intra-class correlations (ICCs), which were Fisher z-transformed to improve linearity for parametric statistics. Initially, we included the whole epoch in the ICC calculations (0–500 ms).

In addition, we also considered a narrower 65–165 ms window, which incorporated the obligatory M50 and M100 responses. We compared GM's ICCs to those of children in the TD and ASD comparison groups using SingLims, a statistical program widely used in neuropsychological case studies. The SingLims approach assumes the comparison participants to be a representative sample of the population, and uses modified t-tests to estimate the “abnormality or rarity” of a case's scores and the percentile ranking of the case (i.e., the percentage of the control population exhibiting a lower score than the case). The tables show the SingLims test results, and point and interval estimates of effect size and abnormality for GM's scores, compared to the TD and ASD comparison groups respectively. GM's ICCs were significantly lower than those of both control groups for both the 65–165 ms and 0–500 ms time periods, in each case placing her in the bottom 5% of the population.
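SingLims implements Crawford and Howell's modified t-test for comparing a single case against a small control sample. A minimal re-implementation of that published formula is sketched below; this is our own sketch, not the SingLims code itself, and it omits the interval estimates of effect size.

```python
import numpy as np
from scipy import stats

def crawford_howell(case, controls):
    """Modified t-test for a single case vs. a small control sample.

    Returns t, the one-tailed p-value (the estimated "abnormality or
    rarity" of the case's score), and the estimated percentage of the
    control population scoring below the case.
    """
    controls = np.asarray(controls, dtype=float)
    n = controls.size
    t = (case - controls.mean()) / (controls.std(ddof=1) * np.sqrt(1 + 1 / n))
    p = stats.t.sf(np.abs(t), df=n - 1)
    pct_below = stats.t.cdf(t, df=n - 1) * 100
    return t, p, pct_below

# The ICCs are Fisher z-transformed before comparison: z = np.arctanh(icc)
```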

    Figure shows the results of the source analysis for GM. It suggests that the striking differences between GM's speech and nonspeech sensor waveforms originate from the left hemisphere.

As for the sensor analysis, we calculated Fisher z-transformed ICCs to index the similarity of each child's nonspeech and speech dipole waveforms, for left and right hemisphere sources. As the dipoles used for source extraction were oriented to the M50/M100 response, we only report ICCs for the corresponding 65–165 ms window. SingLims analyses show that GM had significantly reduced ICCs for the left hemisphere, again placing her in the bottom 5% of the population relative to both control groups. In contrast, her right hemisphere responses were within the normal range. To summarize, GM's MEG recordings were highly atypical. In particular, she showed a striking dissociation between her M50/M100 responses to speech and to nonspeech sounds. This appeared to originate in her left auditory cortex and was not shown by any of the typically developing or autistic children we tested.

    It is important to consider the possibility that GM's atypical recordings may be artefactual. Of particular concern is the possibility that GM may have moved more than other participants during the recording session. The KIT MEG system does not currently incorporate online motion tracking. However, during MEG testing, all participants, including GM, were monitored carefully for head-motion, with strict data acquisition and exclusionary criteria applied for motion (see MEG Recording). Moreover, they were lying in a supine position that helps support the head and reduce unwanted movements during recording. Finally, and perhaps most importantly, it is highly unlikely that excessive motion could have given rise to the specific pattern of responses we have reported.

We would not expect motion to affect responses to speech and nonspeech differentially or to result in exaggerated hemispheric asymmetries. Nor would we expect artifacts to result in a clearer response to nonspeech than that found in controls.

Experiment 2

Two years after the initial recording session, we had the opportunity to re-test GM as part of a second ongoing study, the aim of which was to validate a lightweight “wireless gaming” EEG system as a research tool for use with typically developing children. If the findings from Experiment 1 were a genuine reflection of atypical brain responses, we expected to find similar atypicalities in GM's EEG recordings.

Replicating our findings from Experiment 1 would also provide preliminary evidence for the suitability of the gaming EEG system for the assessment of minimally verbal children with ASD.

Participants

At the time of testing for Experiment 2, GM was 10 years and 10 months old. Her auditory brain responses to nonspeech sounds were compared to those of 21 TD children (11 females, 10 males) aged between 6 and 12 years, tested using the same procedures as part of a validation study for the EEG system. The mean age of TD participants was 9.23 years (SD = 1.78). Participants had normal hearing and vision, and no history of developmental disorders or epilepsy.

Stimuli

Stimuli were standard tones (n = 566; 175-ms 1000-Hz pure tones with a 10-ms rise and fall time; 85% of trials) and deviant tones (n = 100; 175-ms 1200-Hz pure tones with a 10-ms rise and fall time; 15% of trials), separated by a jittered SOA of 900–1100 ms. Tones were presented binaurally at a comfortable listening volume through speakers. Participants in the TD group heard 666 tones in a single block. Due to concerns about potential movement artifacts, GM was presented with a second block of 666 trials after a short break.

EEG Recording and Analysis

Participants were seated in a comfortable chair and watched a silent video whilst ignoring the tones.

Auditory brain responses were measured using an Emotiv EPOC gaming EEG system that has previously been validated against a research-grade Neuroscan EEG system. The sensors in the headset were adjusted on the head until suitable connectivity was achieved, as indicated by the TestBench software, which adds a small modulation to the feedforward signal and measures the size of the signal returned from each channel. The testing procedure took 10–15 min. The Emotiv EEG system uses gold-plated contact sensors fixed to flexible plastic arms of a wireless headset. The headset included 16 sites, aligned with the 10–20 system: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4, M1, and M2. One mastoid (M1) sensor acted as a ground reference point to which the voltage of all other sensors was compared. The other mastoid (M2) was a feed-forward reference that reduced external electrical interference.

    The signals from the other 14 scalp sites (channels) were high-pass filtered with a 0.16 Hz cut-off, pre-amplified and low-pass filtered at an 83 Hz cut-off. The analog signals were then digitized at 2048 Hz.

    The digitized signal was filtered using a 5th-order sine notch filter (50–60 Hz) and low-pass filtered and down-sampled to 128 Hz. The effective bandwidth was 0.16–43 Hz. The Emotiv EEG system was modified to send markers to the EEG to indicate the onset of each stimulus. This was achieved using a custom-made transmitter that converted the onset and offset of each tone into a positive and negative electrical signal. These signals were transmitted into the O1 and O2 channels using an infrared triggering system. The positive and negative spikes in the O1 and O2 EEGs were processed offline in MATLAB. A between-channels difference greater than 50 mV was coded as a stimulus onset or offset.

The event marker was placed at a constant time interval (the 20 ms delay of the transmitter module) prior to the point of the positive and negative signal cross-over. Stimulus markers were then recombined with the EEG data. The resultant EEG was processed offline using EEGLAB version 11.0.5.4b. The EEG in each channel was bandpass filtered from 0.1 to 30 Hz, and then divided into epochs that started 102 ms before the onset of each stimulus and ended 500 ms after the onset of the same stimulus. Each epoch was baseline corrected from −102 to 0 ms. Epochs with absolute values greater than 150 µV were rejected. To maximize the amount of useful data, we collapsed across tone types (standard and deviant).
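The marker-recovery step can be sketched as follows: find the samples where the O1–O2 difference exceeds the threshold, separate positive (onset) from negative (offset) spikes, and shift back by the transmitter's fixed 20 ms delay. Variable names and sign conventions are assumptions for illustration; the original processing was done in MATLAB.

```python
import numpy as np

def recover_stimulus_events(o1, o2, fs=128, threshold=50.0, delay_s=0.020):
    """Recover stimulus onsets/offsets from trigger pulses injected into
    the O1/O2 channels. `threshold` is in the same units as the signals
    (the text specifies a 50 mV between-channels difference).
    """
    diff = o1 - o2
    above = np.abs(diff) > threshold
    # First sample of each supra-threshold run marks a trigger event.
    edges = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    onsets = edges[diff[edges] > 0]    # positive spikes coded as onsets
    offsets = edges[diff[edges] < 0]   # negative spikes coded as offsets
    shift = int(round(delay_s * fs))   # constant 20 ms transmitter delay
    return onsets - shift, offsets - shift
```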

For GM, this procedure resulted in a total of 220 accepted epochs across her two blocks of recordings. For control participants, there were many more acceptable trials (mean = 617, SD = 42 for a single block), but in order to equate GM and the controls for data quality, for each TD participant we randomly sampled 220 trials (including standards and deviants). For each participant, we then averaged the 220 epochs to create an auditory ERP.

Results and Discussion

Figure shows GM's responses recorded from the two electrodes, F3 (left frontal) and F4 (right frontal), that produced the clearest response in the TD control participants. Consistent with her atypically large MEG response to nonspeech stimuli in Experiment 1, GM showed a strikingly strong and early response to the tone stimuli, particularly for the left frontal electrode. This was clearly outside the range of any of the TD control participants. Thus, GM's unusually large brain response to nonspeech stimuli appears to be a stable and replicable characteristic of her cortical response to a range of nonspeech stimuli.

General Discussion

Minimally verbal individuals represent a significant proportion of the autistic population and yet are typically excluded from research on cognition and brain function. In the current study, we used MEG and EEG to measure the brain responses to auditory stimuli of a minimally verbal child with ASD. The initial MEG study in Experiment 1 revealed a striking dissociation between her auditory sensory encoding of speech and nonspeech sounds. Specifically, GM had relatively strong and early responses to nonspeech, but unusually weak responses to speech sounds. MEG source analysis suggested that these differences arose in her left hemisphere. We were able to demonstrate statistically that this discrepancy between speech and nonspeech stimuli was highly unusual. Whether compared to typically developing children or other verbal children with ASD, GM's response similarity for speech and nonspeech fell into the bottom 5% of the population.

In Experiment 2, we replicated the finding that GM shows an unusually strong response to nonspeech stimuli. This was observed despite the fact that she was tested 2 years after Experiment 1 using a different neurophysiological technique (EEG rather than MEG), in a different environment, using different stimuli (pure tones rather than complex tones), as well as a different control sample.

This successful replication indicates that GM's atypical responses to nonspeech sounds are genuine and not merely a consequence of methodological artifacts. GM's atypical responses to nonspeech sounds in both experiments might be considered a neural correlate of the atypical auditory processing that is widely reported amongst individuals with ASD.

Autobiographical accounts of individuals with ASD often include descriptions of atypical sensory experiences, particularly in relation to sounds. These accounts are supported by parental reports, clinical observations, and enhanced performance on certain psychoacoustic tests. Surprisingly, then, GM appears to show little evidence of hyper-responsiveness to auditory stimuli in everyday life, as documented by her mother's responses on the Short Sensory Profile. Given that GM is nonverbal, we were unable to obtain a self-report of her sensory experiences. Thus, it remains an open question what the subjective experience of her atypical cortical responses might be. Clearly, the other intriguing aspect of GM's data is her attenuated response to speech stimuli in the MEG experiment.

One interpretation is that GM's brain “switches off” to speech stimuli. This would be consistent with theories of social deficit or an impairment in social motivation and cognition in ASD, and with previous ERP studies suggesting that children with ASD show a difference in attentional orienting to speech and nonspeech sounds, particularly when they are not explicitly required to attend to the sounds. However, previous studies have focused on the later mismatch negativity and P3 components of the auditory ERP, whereas the striking differences between speech and nonspeech in GM's brain responses were apparent much earlier in the waveform, during the “obligatory” M50/M100 components. This suggests that GM's differential response to speech and nonspeech sounds reflects a bottom-up mechanism in her brain's sensitivity to the acoustic differences between the two stimuli. The major difference between the speech and nonspeech stimuli is the presence of the fundamental frequency (F0) in the speech stimuli. This serves to give a sound its “speechiness” and provides pitch cues for conveying linguistic and emotional prosody as well as information about speaker identity. Perhaps most importantly, the fundamental frequency also provides a vital cue for segregating speech from background noise in natural listening environments.

Thus, a neural impairment affecting the processing of the fundamental frequency might be expected to have profound implications for the development of speech perception. It is important to note that GM also has a diagnosis of cerebral palsy, which sets her apart from other minimally verbal autistic children. The nature of the relationship between ASD and cerebral palsy is unclear and difficult to tease apart. Although the incidence of ASD is considerably higher amongst individuals with cerebral palsy (approximately 6%) than it is in the general population, the majority of individuals with cerebral palsy do not meet ASD criteria. Likewise, speech and language abilities are affected in the majority of individuals with cerebral palsy, but the complete absence of speech is relatively rare.

Concluding Remarks

The current case report represents a starting point for investigating the potential causes of the severe language impairment that affects many individuals on the autism spectrum.

However, GM is obviously an unusual case and, at this stage, it is unclear whether or not her atypical brain responses might generalize either to other minimally verbal children with ASD or to those with cerebral palsy. Nonetheless, the current study stands as an important proof of concept, demonstrating that it is possible in practice to measure brain responses to different auditory stimuli, using both MEG and EEG, from minimally verbal children with ASD. Future studies can take advantage of the complementary strengths of these two techniques and begin to answer vital questions pertaining to cognition and brain function within this much-neglected subgroup of the ASD population.

Author Contributions

SY and JB contributed to the conception, design, acquisition of data, analysis, interpretation, and drafting and revising of the manuscript. GM contributed to data interpretation and to drafting and revising the manuscript. NB contributed to the conception, design, acquisition of EEG data, EEG analysis, interpretation, and drafting of the manuscript.

All authors read and approved the final manuscript and have given final approval of the version to be published.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This study was funded by the Macquarie University Research Excellence Scholarship (MQRES), Australian Research Council grants (DP098466, ARC 1236500) and an ARC Centre of Excellence Grant (CE110001021). We would also like to thank Dr Ivan Yuen for helping to create the deviant vowel sounds used in Experiment 1. Lastly, we are very grateful to GM and her family, and all the children and families who took part in this study.

Reviewed by: Beth Israel Deaconess Medical Center, USA; American University of Beirut, Lebanon.

Copyright © 2015 Yau, McArthur, Badcock and Brock. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).

The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. Correspondence: Shu H. Yau, Aston Brain Centre, Aston University, Aston Triangle, Birmingham B4 7ET, UK, s.yau@aston.ac.uk.

2.1 BCI / EEG

The brain consists of billions of neurons. Communication between neurons is manifested in electrical signals. Electroencephalography (EEG) measures this electrical activity along the scalp. It is a well-established technique dating from the 1920s. EEG has a low spatial but a high temporal resolution, which makes it ideal for recording changes in brain activity in response to events.

A BCI takes brain activity - for example in the form of an EEG signal (which is the case for all BCIs mentioned in this thesis) - as input. For example, Figure shows an EEG signal with visible alpha waves.

Figure 2.1: EEG signal with alpha waves marked with black boxes.

Brain activity is not the only source of electrical signals along the scalp. Muscle activation also relies on electrical signals; measuring these is called electromyography (EMG). Eye movement furthermore discharges electricity due to the eye's dipole properties - measuring this signal is called electrooculography (EOG).

    In the context of measuring EEG, EMG and EOG are typical noise artifacts. EEG is measured either with stand-alone electrodes or electrodes attached to a cap. Examples of high fidelity EEG BCIs are the gamma-2-cap from g.tec and Easy Cap from EASYCAP.

    Some caps can even have up to 256 electrodes mounted. Caps need mounting preparations with gel to improve the conductivity between scalp and electrodes. We have tried out the gamma-2-cap during a BCI seminar in Aalborg 2013 (Figure ).

A hi-fi EEG-based BCI costs around 10,000-15,000 Euro and includes, for example, a cap with electrodes, cables and an amplifier. The EEG data is usually processed and analyzed offline with tools like EEGLAB or FieldTrip, which are open-source plug-ins to MATLAB.

2.1.1 Classic BCI applications and techniques

There are different approaches to processing a raw EEG signal. Some of these and their typical applications are briefly covered below. Evoked potential analysis correlates visual/auditory stimuli with EEG responses.

When an event of significance is perceived, the brain produces characteristic responses. One widely used response is the P300, which manifests itself as a peak in the EEG signal 300 ms after a stimulus - for example a flash of an image. In the intendiX P300 speller application from g.tec, different letters are flashed for the user while the P300 response is used to determine which letter the user wants to select. Motor imagery is another classic approach to BCI, in which an imagined movement of a body part causes motor cortex activity which is detected by the BCI. In this way imagined movement can be used, for example, to control wheelchairs and other vehicles.

This technique has been used in gaming as well, e.g., for trigger activation.

Motor imagery requires spatialization (localization) of brain activity, especially within the motor cortex. This presents a challenge for EEG-based BCIs since they have a low spatial resolution. Classification of EEG signals is widely used within BCI applications. For example, it has been used for unique identification of a person for authentication purposes. Various research groups use classification to predict epilepsy attacks. Others do pattern matching on walking motions to assist in rehabilitation after strokes.

This approach has also been used to classify: (i) emotions like joy and anger; and (ii) human expressions like a happy facial expression or a mental mood.

Figure 2.3: Raw EEG and Fast Fourier transformed EEG; panel (b) shows a single EEG channel after the Fast Fourier transform (FFT) has been applied.
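As a hedged illustration of what such classification looks like in practice, the sketch below trains a linear classifier on band-power features. The data are synthetic stand-ins; a real pipeline would extract the features from recorded EEG epochs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic band-power features (delta..gamma) for two hypothetical
# classes, e.g. "relaxed" vs. "mentally active".
relaxed = rng.normal(loc=[4, 3, 6, 2, 1], scale=1.0, size=(100, 5))
active = rng.normal(loc=[4, 3, 3, 5, 2], scale=1.0, size=(100, 5))
X = np.vstack([relaxed, active])
y = np.array([0] * 100 + [1] * 100)

# Standardize the features, then fit a linear support vector machine.
clf = make_pipeline(StandardScaler(), LinearSVC())
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```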

    Frequency analysis is another technique for processing EEG data. Neurons are organized in networks and communication among them is always ongoing in oscillatory patterns. Frequency analysis estimates the power of each frequency component. One common approach to frequency analysis is to apply Fast Fourier transform (FFT) - a simple example is plotted in Figure.

When applying the FFT, we go from the time domain into the frequency domain, as can be seen from the x-axis values of the plots before and after the FFT. Frequency analysis is often used in conjunction with other methods of analysis - for example, to extract features for classification.
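A minimal example of this time-to-frequency transformation: one second of synthetic 'EEG' containing a 10 Hz alpha component plus noise is transformed with the FFT, and the dominant frequency is read off the spectrum. The signal is fabricated purely for illustration.

```python
import numpy as np

fs = 128                      # sampling rate in Hz
t = np.arange(fs) / fs        # one second of samples
# Synthetic signal: a 10 Hz "alpha" sinusoid plus broadband noise.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(fs)

spectrum = np.fft.rfft(x)                  # time domain -> frequency domain
freqs = np.fft.rfftfreq(len(x), d=1 / fs)  # frequency axis (the new x-axis)
power = np.abs(spectrum) ** 2
print(freqs[np.argmax(power[1:]) + 1])     # peaks at ~10 Hz (skipping DC)
```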

It is also used stand-alone, either for neurofeedback or in research aiming to correlate certain frequency patterns with some condition or cognitive task. This is very typical within EEG research, exemplified by a study showing that 'high resting theta power in healthy older adults is associated with better cognitive function'.

Frequency analysis is interesting due to the correlation between frequencies and mental states.

A rough overview is lined up in the table below.

Brainwave type    | Frequency range | Mental states and conditions
Delta             | 0.5-3.5 Hz      | Deep sleep
Theta             | 3.5-8 Hz        | Falling asleep
Alpha             | 8-12 Hz         | Relaxed awake state (dominant with eyes closed)
Beta              | 12-30 Hz        | Mental activity, attention, concentration
- Midrange beta   | 16-20 Hz        | Thinking, aware of self & surroundings
- High beta       | 21-30 Hz        | Alertness, agitation
Gamma             | 30-100 Hz       | Reflects the mechanism of consciousness

Figure 2.4: Generalized frequency bands
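The band overview in Figure 2.4 can be written down directly as a lookup table and used to summarize a spectrum into per-band power values, as in this sketch (band edges taken from the figure; at typical consumer sampling rates the gamma band is capped by the Nyquist frequency):

```python
import numpy as np

# Frequency bands from Figure 2.4, in Hz.
BANDS = {
    "delta": (0.5, 3.5), "theta": (3.5, 8.0), "alpha": (8.0, 12.0),
    "beta": (12.0, 30.0), "gamma": (30.0, 100.0),
}

def band_powers(x, fs):
    """Mean spectral power of signal `x` within each named band."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```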

The hi-fi BCIs are getting mobile. This trend is exemplified by a mobile version of the Easy Cap and helmets with built-in EEG sensors for soldiers. Another branch of BCIs that has come far in going mobile is the consumer BCIs, as described in the next section.

2.2 Consumer BCIs

Within recent years, consumer BCIs have emerged and moved BCIs outside the laboratories. An early consumer BCI was the Neural Impulse Actuator (NIA), released in 2008, featuring a three-sensor forehead configuration and connectivity through a desktop box with cables. The NIA was intended primarily for gaming and cost around 100 USD (it is no longer in production).

    Consumer headsets today typically offer additional sensors (accelerometer, gyroscope, etc) and wireless connectivity. An overview of current state consumer BCIs is presented in Table.

2.2.1 Emotiv EPOC

The EPOC comes out of the box with a desktop SDK and a set of tools aimed at gaming. The closed-source SDK provides detection of emotions, expressions, cognitive states and more (Figure ). There is also a research edition of the EPOC headset which can record raw EEG. It comes with the TestBench desktop application for recording and viewing the raw EEG data. TestBench can process EEG into various frequency bands, and the raw EEG data can be exported in the EDF format (multichannel biological and physical signals) (Appendix ).

Before using the EPOC headset, the user has to moisten each of its 14 electrodes in a saline solution and then attach each electrode to the headset. This preparation took us about 10-15 minutes once we had achieved some routine. The EPOC has no SDK for mobile devices, but has been hacked for mobile usage in conjunction with the USB dongle - we return to this in Section. EPOC seems to be the consumer BCI that appears most frequently in research papers. An overview of its application within research is given below.

The FlyingBuddy2 uses motor imagery to make it possible for a disabled person to steer a drone, with the future perspective of steering a wheelchair. A group in Spain uses the SDK's out-of-the-box facial expression classification (based on EMG), such as opening and closing a clench, combined with EOG data to steer a tractor. Again using SDK classifications, an emotion-based chat application has been built, featuring avatars that change their expressions from angry to happy based on the emotional state of a person. In a recent M.Sc. thesis, the EPOC was used for a brain-wave biometrics authentication system.

The NeuroPhone project used a P300 approach to make phone calls on a smartphone - however, the EEG processing was performed on a laptop. EPOC has also been used in a human-robot interaction study, where EEG was used to classify human satisfaction with the interaction with a robot. Finally, EPOC has been used by media researchers at the Danish Broadcasting Corporation (DR) as a supplemental tool to qualitative interviews and questionnaires. Throughout the video screening of a TV drama production, the screening participants' brain states were measured in terms of EPOC SDK values such as excitement, frustration and attention. In an interview with Harddisken (a DR radio program about technology), Jacob Lyng Wieland - in charge of the experimental usage of BCI during video screenings - reported that they had stopped using the EPOC headset because it was too cumbersome to use.

2.2.2 NeuroSky MindWave Mobile

NeuroSky MindWave Mobile (MindWave) offers desktop and mobile SDKs (iOS and Android). The closed-source SDK features frequency processing and analysis, outputting values for the level of 'attention' and 'meditation' (Figure ).

The SDK also provides information about eye blinks and a number of frequency bands, which we previously outlined in Figure. The MindWave SDK outputs the following frequency bands: delta (0.5-2.75 Hz), theta (3.5-6.75 Hz), low-alpha (7.5-9.25 Hz), high-alpha (10-11.75 Hz), low-beta (13-16.75 Hz), high-beta (18-29.75 Hz), low-gamma (31-39.75 Hz) and mid-gamma (41-49.75 Hz). The headset is easy to use and the SDK includes a simple Bluetooth API that seamlessly supports device connectivity. Due to its connectivity options and SDK signal processing, the MindWave Mobile requires little effort to embed in a prototype. This has been done, for example, in a recent paper by Marchesi, who uses MindWave in the BRAVO project to detect attention among school children in an e-learning setting. If a child's attention level is under some threshold, it is reported to the other children, who are encouraged to offer their help.

In another study, a research group uses MindWave to measure attention during an online game. They specifically look at the attention levels provided by the SDK versus self-reported attention levels among a group of participants. They conclude that the self-reports and the SDK values are correlated. Another paper uses the attention and meditation SDK values to examine the stress levels among participants while performing various tasks. It concluded that the MindSet was able to measure an increase in stress induced by the tasks performed (Stroop test, Tower of Hanoi). In a further study, the brain state - defined in terms of EEG frequency composition - of a test subject driving a car was measured by a predecessor to the MindWave.

Raw EEG data is recorded to a mobile phone via Bluetooth and its frequency composition is analyzed offline. Interestingly, the results show a change in the brain-wave frequency pattern when the driver performed, for example, a phone call. Finally, in an M.Sc. thesis, classification of the raw EEG data from the MindSet is used to control a snake-like game aimed at children. MindWave comes out of the box with a mobile application named Brainwave Visualizer, which lets its user inspect the current levels of the 8 frequency bands supported by the SDK. The app also provides simple neurofeedback by letting its user control the flying height of a ball via the SDK 'meditation' value or the intensity of a flame via the SDK 'attention' value. The same approach to neurofeedback is used in the third-party app Transcend by Personal Neuro.

During meditation, the user can get a flower to grow by increasing the 'meditation' SDK value.

2.2.3 TrueSense Kit

The TrueSense Kit is the newest, cheapest, and most portable headset. It comes with OPI Console, an open-source desktop application for recording and viewing raw EEG data and analyzing sleep, meditation, etc. from recorded EEG. It also enables exporting data as EDF files for further processing and analysis in other applications. The OPI Console also offers sleep analysis and yoga performance analysis (Figure , Appendix ). The TrueSense Kit sensor(s) can be placed on various parts of the body for measuring blood flow, heart rate, body temperature and body movements.

The TrueSense Kit records either directly to an internal memory module or transmits data over ZigBee radio to the OPI Console through a USB receiver. The sensors can be combined in a multi-sensor configuration attached at various places on the head or body.

The TrueSense Kit provides no immediate mobile device connectivity, but the OPI Console application can likely be ported to Android since it is built with the Qt framework. Another approach would be to build a native C Android module from the TrueSense Kit C SDK. Since few Android devices currently support ZigBee out of the box, this would require an external receiver board. The TrueSense Kit is not yet covered in any papers despite its support for flexible experimental setups. It was warmly received by the quantified-self community at the yearly QS conference in Amsterdam in 2013.

2.2.4 Future headsets

New consumer BCIs are about to arrive, for example Muse (as briefly mentioned in the Introduction Section ) and Emotiv INSIGHT. These new headsets have some characteristics in common which seem to be representative of the new generation of consumer BCIs:

- they support Bluetooth
- they use dry electrodes
- they are discreet and comfortable to wear

An interesting fact is that both of these headbands are crowdfunded. Muse raised 287,472 USD in 2012 from an unknown number of supporters on Indiegogo. The Emotiv Insight had raised pledges of 1,643,117 USD by the end of September 2013 from nearly 5000 people on Kickstarter.

This signals an interest in low-cost consumer BCIs and exemplifies pretotyping: presenting and selling the product before it has actually been built. Most importantly, these new BCIs strengthen the possibility for neurofeedback among consumers in their daily settings. In the next section we focus on the neurofeedback concept.

2.3 Neurofeedback

When given real-time feedback on its oscillations, the brain can learn to control and change them. This is interesting since the brain's oscillations are significantly correlated with brain functions and behavior as well as with psychiatric diseases.

Neurofeedback training exploits this mechanism by providing feedback based on oscillation frequencies correlated with some desirable function or behavior. The neurofeedback mechanism was discovered and developed in the 1960s, but the first controlled studies providing clinical evidence supporting neurofeedback training effects were published in the 1980s. Since then, the efficacy of neurofeedback therapy has been documented in several studies, and neurofeedback is listed among the treatments with the highest evidence support for certain conditions according to the American Academy of Pediatrics (AAP). Neurofeedback is routinely used in treatment of a number of conditions including Attention Deficit Hyperactivity Disorder (ADHD), anxiety, epilepsy, and addictive disorders. Besides its clinical usage, a number of studies show that neurofeedback training can increase cognitive performance. For example, it has been shown to increase semantic working memory, focused attention, perceptual sensitivity and reaction time.

Neurofeedback training has also been shown to be effective by real-life behavioral measures - e.g., by increasing musical performance in a stressful context among conservatory students in a study designed to ensure ecological validity.

2.3.2 Stress and alpha feedback training

Alpha feedback training is the subset of neurofeedback training for which the goal state of the feedback is defined in terms of the amount of alpha waves - thereby seeking to increase alpha activity.

Alpha activity is associated with a relaxed consciousness. Together with theta, alpha is the EEG frequency band in which effects of meditation are most significant. Alpha 'blocking' (i.e., reduction) is associated with alertness. Thus, by increasing alpha levels, alpha feedback training has been shown - amongst other positive effects such as increased cognitive performance - to reduce stress and anxiety. With a classification approach, EEG has been used to classify subjects from either a chronically stressed group or a control group with a success rate higher than 90%. This testifies to the manifestation of stress in EEG data. The dominant frequency within the alpha band - the alpha peak - and the amplitude of the alpha band vary between individuals.

An alpha feedback training system can account for this by calibrating according to the individual alpha peak and the baseline amount of alpha. This is, for example, the approach taken in one published alpha feedback system. The importance of giving feedback on individually determined frequency bands has also been investigated, with one study concluding that 'neurofeedback training applied in individual EEG frequency ranges was much more efficient than neurofeedback training of standard EEG frequency ranges'.
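As a concrete illustration of such calibration, the sketch below estimates the individual alpha peak from a baseline recording, defines the training band around it (peak ± 2 Hz, a common convention assumed here rather than taken from the cited system), and maps ongoing alpha power relative to baseline onto a 0-1 feedback value. This is our own minimal sketch, not the cited system's implementation.

```python
import numpy as np
from scipy.signal import welch

def alpha_peak(baseline, fs, search=(8.0, 12.0)):
    """Individual alpha peak: frequency of maximum baseline power
    within the standard alpha search range."""
    freqs, psd = welch(baseline, fs=fs, nperseg=2 * fs)
    mask = (freqs >= search[0]) & (freqs <= search[1])
    return freqs[mask][np.argmax(psd[mask])]

def alpha_power(x, fs, band):
    """Mean power within the individually calibrated alpha band."""
    freqs, psd = welch(x, fs=fs, nperseg=len(x))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def feedback_value(window, fs, band, baseline_power):
    """Feedback in [0, 1]: 0.5 means alpha at baseline level;
    higher values mean more alpha than baseline."""
    ratio = alpha_power(window, fs, band) / baseline_power
    return float(np.clip(ratio / 2.0, 0.0, 1.0))

# Calibration phase, using noise as a stand-in for a real recording.
fs = 128
baseline = np.random.randn(60 * fs)
peak = alpha_peak(baseline, fs)
band = (peak - 2.0, peak + 2.0)
base_power = alpha_power(baseline, fs, band)

# Training phase: compute feedback for each incoming 2-second window.
fb = feedback_value(baseline[:2 * fs], fs, band, base_power)
```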

2.4 Related works

Having explained the current state of consumer EEG BCIs, the neurofeedback mechanism, and how alpha feedback training can help to reduce stress, we now present existing systems and research within the consumer neurofeedback domain. There is only a limited number of such systems and research, for reasons already mentioned above:

- Neurofeedback therapy is expanding but not widely adopted yet.
- Consumer BCIs have only emerged within recent years. They are still maturing and not widely adopted yet.

This section presents and discusses the commercially available systems Brainball and BioZen and the research project Smartphone Brain Scanner 2.

2.4.1 Brainball

According to the researchers behind it, Brainball 'dwells in the realm between art and research, entertainment and science, method and object'.

They present a game with a tangible user interface in which two opponents sit on chairs separated by a table. A steel ball lies between them.

The players wear specialized BCIs, mounted on their foreheads, somewhat similar to the NIA system (see Section ). The EEG signal is analyzed into its frequency components and the ball will roll away from the most relaxed player (drawing on the correlation between alpha activity and relaxation). While playing, the players are able to see a screen visualizing their EEG activity. The creators of Brainball interestingly report that playing the game leads to increased relaxation, as measured by both galvanic skin response (GSR) and self-reports.

Brainball received a lot of attention, including an honorary mention at Ars Electronica 2000 and 100+ appearances on TV. However, it remains a niche product within gaming and entertainment due to the dependency on specialized hardware (BCI, ball, screen, etc.). The Brainball BCI system is commercially available through a Swedish company under the name Mindball (Figure ).

2.4.2 BioZen

BioZen is a consumer biofeedback system developed by the National Center for Telehealth and Technology (T2) under the US Department of Defense. The fact alone that this organization is behind a biofeedback system testifies to the increasing adoption of the biofeedback (including neurofeedback) method. The system consists of an Android application in conjunction with one or more consumer bio-sensors. Several sensors are supported, including heart rate, skin temperature, GSR and EEG sensors.

For EEG measurements, the NeuroSky MindWave (and some older NeuroSky BCIs) are supported. The BioZen app uses processed data delivered by the NeuroSky SDK (Delta, Theta, Low Alpha, High Alpha, Low Beta, High Beta, Low Gamma, Mid Gamma, (e)Attention, (e)Meditation), and these values can form the basis for neurofeedback training. Relying on the NeuroSky SDK for frequency analysis, BioZen is bound to the limited set of frequency spectra mentioned above. On the BioZen web page, T2 claims that 'BioZen is the first portable, low-cost method for clinicians and patients to use biofeedback in and out of the clinic' and that 'BioZen takes many of the large medical sensors in a clinic and puts them in the hands of anyone with a smart phone'.

In other words, it is promoted for clinical usage - a claim the authors of this thesis are very cautious about making on behalf of AlphaTrainer (see Section ).

Figure 2.8: Screenshots of the neurofeedback from the BioZen Android app using the MindWave BCI; panel (c) shows saving a feedback session with a tag (meditation, breathing, entertainment or working) and a note (images courtesy of BioZen).

The feedback consists of an image of a hill in which the background brightness and the visibility of a foreground tree are the feedback variables. The background is brighter when the chosen parameter (e.g., some EEG power band) is higher, while the foreground tree is more visible when the chosen parameter is more stable (see Figure ).

2.4.3 Smartphone Brain Scanner 2

The Smartphone Brain Scanner is developed at the Technical University of Denmark (DTU).


The project aims at moving EEG research out of the laboratory by means of low-cost wireless BCIs and smartphone-based real-time neuroimaging software which 'may transform neuroscience experimental paradigms'. The important notion here is that while leaving the laboratory and using consumer interfaces, the focus is still on research. They are in principle headset agnostic and support the Easy Cap and EPOC (described in Section ) BCIs.

The EPOC is used both in its standard configuration and in a modified configuration in which it is merged with hi-fi gel-based electrodes. On the software side, they use their Smartphone Brain Scanner (SBS2) open-source software framework, which includes state-of-the-art EEG signal processing such as source reconstruction, noise filtering and frequency analysis. It is built on the Qt C++ framework, which allows compilation to the major desktop and mobile operating systems. However, it is not trivial to embed a Qt module inside another native application, e.g., on the Android platform. Wireless connection to the EPOC BCI goes via a USB dongle and requires the Android phone to be rooted to function, since the platform does not provide an easy way of interfacing with BCIs via Bluetooth.

Furthermore, it requires the research edition of the EPOC. To validate the design of the Smartphone Brain Scanner, the research team behind it has built three brain-imaging applications, including an alpha training application. Again, the focus is on research - specifically, the interface parameters of neurofeedback training are investigated. The efficacy of two different feedback interfaces was compared in a controlled study by measuring the increase in alpha amplitudes with each interface during a week of intensive training.

One feedback interface shows a square changing color from blue through gray to red, where red represents high alpha. In the other interface, high alpha amplitudes manifest themselves in the creation of small boxes and in the color of the boxes created. By keeping the created boxes visible during the 5-minute training period, the interface reveals performance history, thus 'allowing the user to easily compare methods for increasing the amplitudes' (Figure ). The training effect, measured by comparing baselines, revealed a statistically significant increase in alpha only for the color-changing interface, while the alpha levels during training were significantly higher using the box-creation interface. The conclusions especially relevant to this thesis are that:

- Alpha feedback training is feasible in a mobile setup.
- Alpha levels during training (the effect of feedback) are not necessarily correlated with a general increase in alpha levels (the effect of training).

2.5 Sum up of background and related systems

The table below lists neurofeedback systems which are either commercially available or use a consumer BCI. These systems have been chosen because they outline the current state of systems in the domain of AlphaTrainer. The parameters highlighted in the table are those desirable or necessary for a consumer neurofeedback system. No existing system includes all parameters. For example, no consumer-available system includes the ability to give feedback on individually adapted frequency bands, which is important for the effectiveness of the feedback training (see Section ). This 'gap' among current neurofeedback systems is our motivation for designing and building AlphaTrainer, as discussed in the following chapters.

Parameter                                        | Brainwave Visualizer & Transcend | Brainball | BioZen | Alpha feedback app (Smartphone Brain Scanner)
Convenient (dry sensor + Bluetooth connectivity) | yes                              | no        | yes    | no
Headset agnostic                                 | no                               | no        | yes    | (yes)
Individual feedback spectra                      | no                               | n/a       | no     | yes
Efficacy documented                              | no                               | no        | no     | yes
Available to customers                           | yes                              | yes       | yes    | no
Low cost                                         | yes                              | no        | yes    | n/a

Figure 2.10: Related systems

Footnotes

Android app: The experiment used a MindSet headset (the generation before MindWave but with the same chipset and electrode).

Neurofeedback is a subset of biofeedback, which is the term used in the American Academy of Pediatrics (AAP) recommendations.
