Wednesday, May 16, 2018

Improving College Students’ and Others’ Mental Health with Conversational Agents

Mary Harrsch
Networks and Management Information Systems (Retired)
University of Oregon College of Education

This is a cross-post from the Information Age Education newsletter

Mental illness is common in the United States. About one in four adults suffers from some form of mental illness in a given year (Holmes, 1/14/2015).

This level of occurrence is even higher for college students—perhaps as high as one in two, according to the article, Delivering Cognitive Behavior Therapy to Young Adults with Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial (Fitzpatrick, et al., April-June, 2017). In a related article, Depression in College Students, Michael Kerr points out that financial worries due to high debt and poor employment prospects, coupled with failed relationships, lack of sleep, poor eating habits, and not enough exercise, frequently result in the development of depression (Kerr, 3/29/2012). There are also many life transitions and challenges to a student’s identity. Quoting from Margarita Tartakovsky’s article, Depression and Anxiety Among College Students (Tartakovsky, 7/17/2016):

…college calls for a significant transition, where “students experience many firsts, including new lifestyle, friends, roommates, exposure to new cultures and alternate ways of thinking,” observes Hilary Silver, M.S.W., a licensed clinical social worker and mental health expert for Campus Calm.
Adjusting to college also influences identity — a phenomenon Silver has termed Identity Disorientation. “When students head off to college, the familiar people are no longer there to reinforce the identity these students have created for themselves.” This can make students “disoriented and feel a loss of their sense of self,” contributing to symptoms of depression and anxiety.

Many of these college students do not seek mental health services. Referring again to the Fitzpatrick, et al., article (April-June, 2017):

…up to 75% of the college students that need them do not access clinical services. While the reasons for this are varied, the ubiquity of free or inexpensive mental health services on campuses suggests that service availability and cost are not primary barriers to care. Like non-college populations, stigma is considered the primary barrier to accessing psychological health services.

As described in this article, in their effort to overcome this fear of stigma, Stanford researchers developed a virtual therapist, or conversational agent (often called a chatbot). The chatbot employs artificial intelligence and natural language processing to deliver cognitive behavior therapy (CBT) to college students who self-identify as suffering from significant depression and/or anxiety.

Stanford's virtual therapist is named Woebot. Like many chatbots, Woebot uses natural language processing to interpret student responses to questions posed by the virtual therapist, then guides the conversation to an appropriate node of a decision tree to provide suggested actions.
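
To make the decision-tree idea concrete, here is a toy sketch in Python. It illustrates the general technique only—keyword matching that routes each user reply to a tree node holding a suggested response—and is in no way Woebot's actual implementation; all node names, keywords, and prompts below are invented for illustration.

```python
# A toy decision-tree chatbot: keyword matching routes each reply to a
# node that holds a suggested response. Illustrative only, not Woebot's code.
TREE = {
    "start": {
        "prompt": "How are you feeling today?",
        "branches": {
            ("anxious", "worried", "nervous"): "anxiety",
            ("sad", "down", "lonely"): "low_mood",
        },
        "default": "neutral",
    },
    "anxiety": {"prompt": "Let's try a breathing exercise together.",
                "branches": {}, "default": None},
    "low_mood": {"prompt": "I'm sorry you're feeling low. Can you name one "
                           "thought behind that feeling?",
                 "branches": {}, "default": None},
    "neutral": {"prompt": "Glad to hear it! Want to set a small goal for today?",
                "branches": {}, "default": None},
}

def next_node(current: str, user_text: str) -> str:
    """Pick the next tree node by scanning the reply for trigger keywords."""
    node = TREE[current]
    words = user_text.lower()
    for keywords, target in node["branches"].items():
        if any(k in words for k in keywords):
            return target
    return node["default"]

# Example exchange: a worried reply lands on the anxiety branch.
node = next_node("start", "I've been really worried about exams")
print(TREE[node]["prompt"])
```

A real system layers statistical language understanding and therapeutic content on top of this skeleton, but the routing-to-a-node structure is the part the trial report describes.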

The Original Chatbot

Chatbot software was originally based on the "Eliza" virtual therapist that was developed back in the mid-1960s by Professor Joseph Weizenbaum at the Massachusetts Institute of Technology (Markoff, 3/23/2008). I studied "Eliza" in the late 1990s and used it as a model for a virtual professor I developed when I worked at the University of Oregon. I was so excited to see that someone had finally recognized the potential of artificial intelligence to help people cope with life's challenges!
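
Eliza's core mechanism was remarkably simple: pattern matching on the user's words, plus "reflection" of first-person pronouns back at the speaker. A minimal sketch of that technique follows; the rules and phrasings here are hypothetical stand-ins, not Weizenbaum's original script.

```python
import re

# Minimal Eliza-style pattern matching with pronoun "reflection".
# Hypothetical rules for illustration, not Weizenbaum's originals.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_text: str) -> str:
    """Return the canned response for the first rule that matches."""
    for pattern, template in RULES:
        m = pattern.match(user_text.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."

print(respond("I feel alone in my new city"))
# → "Why do you feel alone in your new city?"
```

Echoing the speaker's own words back as a question is what made Eliza feel attentive, even though it understood nothing.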

Dr. Weizenbaum's “Eliza” virtual therapist was initially designed to simply keep a conversation going between his chatbot and a human to see if the human could figure out they were talking to a computer and not a real person. However, Stanford's Woebot chatbot uses the scientific principles of cognitive behavior therapy to encourage its human "friends" to develop a positive mindset and overcome depression. Quoting again from the Woebot clinical trials report by Fitzpatrick, et al. (April-June, 2017):
Psychoeducational content was adapted from self-help for CBT. Aside from CBT content, the bot was created to include the following therapeutic process-oriented features:
  • Empathic responses: The bot replied in an empathic way appropriate to the participants’ inputted mood. For example, in response to endorsed loneliness, it replied “I’m so sorry you’re feeling lonely. I guess we all feel a little lonely sometimes” or it showed excitement, “Yay, always good to hear that!”
  • Tailoring: Specific content is sent to individuals depending on mood state. For example, a participant indicating that they feel anxious is offered in-vivo assistance with the anxious event.
  • Goal setting: The conversational agent asked participants if they had a personal goal that they hoped to achieve over the 2-week period.
To engage the individual in daily monitoring, the bot sent one personalized message every day or every other day to initiate a conversation (ie, prompting). In addition, “emojis” and animated gifs with messages that provide positive reinforcement were used to encourage effort and completion of tasks.

A Chat with Woebot

Woebot is now freely available online (Woebot, n.d.). On the Woebot website, you can click on a link that connects you and Woebot to a private Facebook Messenger session that no one else can see. Then Woebot talks with you about how you are feeling and how you can keep a positive frame of mind using techniques from cognitive behavioral therapy. I've had talks with Woebot about those pesky "should" statements, discussions about self-defeating "all-or-nothing" viewpoints, the futility of trying to predict other people's reactions, and the importance of self-compassion. Sometimes the little bot even provides interesting short videos about behavioral research.

One that I found particularly interesting was Carol Dweck’s video about the problem of students who develop a fixed mindset after being told "you're so smart" from a young age. I strongly recommend this excellent 10-minute video (Dweck, December, 2014).

After your initial session, Woebot contacts you each day through Facebook Messenger and engages in a short, friendly conversation. These chats can teach you how to identify your strengths, mentally rework your internal dialogue to develop a healthier opinion of yourself, and recognize negative approaches in your relationships with others. If you wish to talk to Woebot about a specific problem, you can also initiate a conversation just as you would with any of your friends on Facebook Messenger. Woebot is also available as a free smartphone app in the Apple App Store and Google Play.

Using Gamification to Combat Poor Adherence

In their article cited earlier, Fitzpatrick, et al., note that other psychologists have been experimenting with computerized CBT, but that motivating patients to continue interaction with computerized CBT tools has been challenging:

In recent years, there has been an explosion of interest and development of such services to either supplement existing mental health treatments or expand limited access to quality mental health services. This development is matched by great patient demand with about 70% showing interest in using mobile apps to self-monitor and self-manage their mental health. Internet interventions for anxiety and depression have empirical support with outcomes comparable to therapist-delivered cognitive behavioral therapy (CBT). Yet, despite demonstrated efficacy, they are characterized by relatively poor adoption and adherence.

To address these problems of adherence, Woebot's team of researchers adopted the "daily dose" model, since online learning studies have shown that small daily doses of learning appear to be more effective than a single long lecture. They also introduced game-like elements designed to increase the likelihood that people will come back the next day.

CBT for Seniors

I contacted the CEO of the Woebot project, Dr. Alison Darcy, who responded to a written interview I submitted. In it, I encouraged her to develop a Woebot to assist much older people with depression and loneliness. I pointed out that seniors' mental health needs differ significantly from those of college students, as the challenges of aging often involve chronic illnesses, deaths of loved ones, living alone, and feelings of irrelevance when no longer employed in the workplace.

I also pointed out that, although Medicare recognizes depression has a serious impact on quality of life and ensures that a senior's annual wellness visit includes questions about their emotional state, many seniors take friends or family members with them to the doctor. Thus, they may be embarrassed to admit to their physician that they are feeling depressed or even suicidal when their friends or family members are present—very much the same fear of stigma demonstrated by the college students. To make the problem even more difficult to address, many family physicians are not trained in dealing with mental health issues, and the best they may be able to do is refer the senior to a specialist. Appointments to visit such specialists are usually weeks away and often seniors on limited incomes cannot even afford the co-pay, a sad fact of life in the U.S. commercial health care model.

I also think long-term caregivers may themselves need yet another type of Woebot, one that could help them deal with the feelings of frustration, and even anger, that often crop up when dealing day in and day out with a patient or loved one who has physical and emotional impairments.

CBT Delivery with Virtual Assistants

With the growing presence of voice-activated virtual assistants like Amazon's Alexa, in my written interview I also expressed to Darcy my support for porting Woebot to a voice-only interface. Many older adults are not as technology-savvy as college students and are probably not as comfortable with Facebook or a smartphone.

In their clinical analysis of the Woebot development project, Darcy and her fellow researchers apparently agreed with me in theory, saying:

Theoretically, conversational interfaces may be better positioned than visually oriented mobile apps to deliver structured, manualized therapies because in addition to delivering therapeutic content, they can mirror therapeutic process. Conversational agents (such as Apple’s Siri or Amazon’s Alexa) may be a more natural medium through which individuals engage with technology. Humans respond and converse with nonhuman agents in ways that mirror emotional and social discourse dynamics when discussing behavioral health.

However, in my interview with her, Darcy expressed reservations about eliminating the written aspects of therapy made possible by the messenger interface in Facebook or on a smartphone. Continuing to quote Darcy:

The core of what we do—the CBT skills that are triggered when someone is upset in the moment that they reach out to Woebot —is actually dependent on writing down negative automatic thoughts. This is true even in the therapist's office, because it seems to be central to externalizing the thoughts. That is, there is something in seeing your negative thoughts written down that allows you to process it in a different way, ultimately allowing it to be intervened upon (by rewriting).

I do hope she reconsiders. For now, though, I think Woebot, even in its current iteration, could prove helpful to millions of people. I know I find confessing my deepest thoughts to a properly programmed computer application less troubling than revealing them to another human being, who may well have biases of their own.

Summary and Final Remarks

The skyrocketing cost of higher education is adding to the mental toll that the transition to higher education and adult life takes on modern college students. With studies showing that at least one in four, and perhaps as many as one in two, college students suffer from some form of mental illness, psychologists worldwide are now focused on providing mental health care to these young adults. But the stigma that often accompanies mental health treatment remains an obstacle.

Clinical trials with computerized cognitive behavior therapy have demonstrated that CBT delivered anonymously in a computerized environment can be as effective as person-to-person talk therapy in relieving symptoms of depression and anxiety. Furthermore, because these therapy sessions are conducted without patient tracking, the fear of stigma can be greatly reduced. Conversational agents such as Woebot, in combination with gamification strategies, can be used to encourage students to adhere to a treatment program.

As artificially intelligent voice-activated interfaces become more widespread, computerized CBT may become part of students’ daily mental hygiene, helping them maintain the best outlook possible as they navigate higher education’s landscape.

References and Resources
Bickmore, T., Gruber, A., & Picard, R. (October, 2005). Establishing the computer-patient working alliance in automated health behavior change interventions. Patient Education Counseling. Abstract retrieved 4/19/2018 from
Burns, D. (1980). Feeling good: The new mood therapy. New York: Harper Collins.
Burns, D. (2006). When panic attacks. New York: Harmony.
Dweck, C. (December, 2014). The power of believing that you can improve. TED Talks. (Video, 10:20.) Retrieved 4/19/2018 from
Fitzpatrick, K.K., Darcy, A., & Vierhile, M. (April-June, 2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health. Retrieved 4/19/2018 from DOI: 10.2196/mental.7785 PMID: 28588005 PMCID: 5478797.
Holmes, L. (1/14/2015). 19 statistics that prove mental illness is more prominent than you think. Wellness. Retrieved 4/19/2018 from
Hunt, J., & Eisenberg, D. (January, 2010). Mental health problems and help-seeking behavior among college students. Journal of Adolescent Health.
Kerr, M. (3/29/2012). Depression in college students: Signs, causes, and statistics. Healthline. Retrieved 4/19/2018 from
Kessler, R.C., et al. (July, 2007). Age of onset of mental disorders: a review of recent literature. Current Opinion in Psychiatry.
Markoff, J. (3/23/2008). Joseph Weizenbaum, famed programmer, is dead at 85. The New York Times. Retrieved 4/19/2018 from
Tartakovsky, M. (7/17/2016). Depression and anxiety among college students. PsychCentral. Retrieved 4/19/2018 from
Towery, J. (2016). The anti-depressant book: A practical guide for teens and young adults to overcome depression and stay healthy. Palo Alto, CA: Jacob Towery.
Woebot (n.d.). Woebot. Retrieved 4/15/2018 from
Zivin, K., et al. (10/1/2009). Persistence of mental health problems and needs in a college student population. Journal of Affective Disorders.

MOOCs – Models for Learning in the 21st Century: Part 2

Mary Harrsch
Networks and Management Information Systems (Retired)
University of Oregon College of Education

This is a cross-post from the Information Age Education newsletter

In the previous newsletter, I described my experience as a learner in a MOOC hosted by FutureLearn, a UK distance education provider. MOOCs are now being used to teach both pre-college and higher education students. In this newsletter, we will explore the science behind MOOCs as evolving models for learning. 

Working Memory Capacity

Back in 1960, psychologists George Armitage Miller, Eugene Galanter, and Karl Pribram coined the term “working memory” to describe the human brain’s cognitive system for temporarily holding information available for processing (Pribram, et al., 1960). Working memory capacity varies from person to person; genetics clearly plays a role, although some psychologists think it accounts for only about half of this variation (Engelhardt, et al., 2016).

According to developmental psychologists, the developing brain’s working memory capacity (WMC) increases gradually over the course of childhood, reaches its mature level (unique to each individual) in their early twenties (American Psychological Association), then gradually decreases in old age (Salthouse, 1994).

Working memory capacity is most commonly tested with a dual-task paradigm introduced by Daneman and Carpenter (Daneman & Carpenter, 1980). Subjects read a series of sentences (usually between two and six) and try to remember the last word of each sentence. At the end of the set, they repeat back the memorized words in their correct order.
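
For illustration, the scoring of such a reading span task can be sketched in a few lines of Python. This is a hypothetical simplification—real administrations control timing, sentence difficulty, and scoring rules far more carefully—but it shows the structure of the dual task: comprehension sets of increasing size, with span defined by the largest set recalled perfectly.

```python
# A simplified scoring sketch of a reading span task: the subject reads
# sets of sentences and must recall each sentence's final word, in order.
def score_trial(sentences, recalled):
    """Return True if the recalled words match the sentence-final words, in order."""
    targets = [s.rstrip(".!?").split()[-1].lower() for s in sentences]
    return [w.lower() for w in recalled] == targets

def reading_span(trials):
    """Span = largest set size at which every trial was recalled correctly."""
    span = 0
    for set_size in sorted(trials):
        if all(score_trial(s, r) for s, r in trials[set_size]):
            span = set_size
        else:
            break
    return span

# Invented example data: perfect recall at set size 2, one miss at size 3.
trials = {
    2: [(["The cat sat on the mat.", "Rain fell all night."],
         ["mat", "night"])],
    3: [(["He opened the old door.", "She read the letter twice.",
          "The train left early."],
         ["door", "twice", "late"])],
}
print(reading_span(trials))  # → 2
```

The dual demand—comprehending each sentence while holding earlier words in mind—is what makes the task a measure of working memory rather than of simple short-term storage.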

In an academic setting, WMC has been shown to be an important predictor of learning, reasoning, and comprehension (Conway, et al., 2007). But, a human’s working memory, a finite resource, is cluttered with both task-related and unrelated information at any given time. If a student is juggling multiple commitments requiring executive thought processes—processes necessary for the cognitive control of behavior—a student's ability to absorb large amounts of new information may be compromised. The increasing cost of higher education has likely increased this probability. A large majority of college students are now dependent on employment to finance their education. Based on a report released by the Center on Education and the Workforce of Georgetown University, 70% of college students (including myself at the time!) now work while enrolled (Carnevale, et al., 2015). This certainly must have an impact on their ability to maintain focus in the classroom.

So, task performance, in this case learning, is dependent upon an individual’s executive-control ability to keep the learning material being presented mentally active and accessible enough to influence the individual’s behavior (Kane & McVay, 2012).

The Importance of Attention

Proponents of executive-attention theory claim that, although individuals with lower WMC appear to suffer more from distractions created by thoughts unrelated to the task at hand, termed mind wandering, goal achievement is ultimately a product of an individual’s attention control system (Engle & Kane, 2004).

So, what is the average attention span of an adult learner?

“Current researchers argue that the average attention span of American adults has dropped and it is now limited to 20, 10, or even five minutes,” says award-winning instructional designer Art Kohn. “The late educator Neil Postman believed that modern technologies such as television and the Internet are actually reducing people’s attention span. He proposed that our frantic world has somehow rewired the human brain, making us less able to attend to things for long periods. In fact, there is precedent for such a view. For example, the human eyeball, which is a sensory outgrowth of the brain, actually changes shape because of early visual experience. For instance, if a child engages in close-up activities like reading or playing computer games for prolonged periods, the human eyeball develops into a more oval shape to better accommodate these close-up images. The downside of this reshaping, however, is that the children then become myopic (nearsighted) and have difficulty focusing on distant objects.

“Researchers propose a similar process to explain the shortening of adults’ attention spans (and perhaps the epidemic of attention deficit disorders in children). The theory states that because of exposure to our frantic world with its persistent thrills, challenges, and competition, a person’s brain somehow rewires itself to better accommodate this rapid pace. The downside is that the same brain has difficulty focusing on the more mundane experiences of everyday life” (Kohn, 2014).

Kohn also points to a new theory that claims learners, especially Millennials, have become accustomed to seeking information on an as-needed basis and are unwilling to attend to material that is not perceived as being immediately interesting and valuable. Quoting again from Kohn, “The advent of instant information has made people impatient with traditional spoon-fed training. Instead, they want to guzzle knowledge when, but only when, they need it.”

How MOOCs Address Learning Challenges

I am hardly a Millennial and not even technically a digital native. However, I think these psychological factors clearly explain my own inability to stay focused for an hour of passive listening in a traditional classroom, and also the apparent inattention of many of my much younger classmates. Like 65% of all adults, I am predominantly a visual learner. So, a lecture that has few visual components would not be presenting information in a format that I would assimilate easily.

However, FutureLearn and other organizations are now using the model of interleaved lessons, often rich with graphics and video clips, coupled with discussion forums and computerized assessment tools that provide immediate feedback. I find this format meets my needs. In addition, students can easily pause or replay segments of recorded information to review and reinforce their understanding of the information presented.

Furthermore, the chunking of information into learning experiences that incorporate a variety of activities requiring about 15 - 20 minutes of concentration per exercise, like those I encountered in my FutureLearn course, also more closely approximates the average adult attention span. In addition, the discussion questions and interactions with classmate responses provide an opportunity to reflect on the information provided and correlate it with previous learning and experiences.

Smallwood and Schooler have asserted that tasks requiring controlled processing are less likely to support mind-wandering. The rationale behind this assertion is that the scarcity of executive resources makes it hard for a person to divert actions to task-unrelated thoughts. Hence, tasks requiring a maximum degree of cognitive control are less prone to mind-wandering than those requiring minimal cognitive control (James, 2018).

The peer-to-peer discussion forums surrounding open essay-type answers used in humanities courses also provide much quicker feedback than you would get from an instructor, even one assisted by three to five teaching assistants. Daphne Koller, co-founder of U.S.-based MOOC provider Coursera, reports that in their courses, the median response time for a question posted in a lesson's global discussion forum was 22 minutes (Coursera, 2018). She attributes this to the worldwide nature of student enrollments: regardless of the time of day you are working on a class unit, someone somewhere else in the world is often working on that same unit at the same time. So, students often help each other much more quickly than the faculty facilitators could (Koller, June, 2012).

Anant Agarwal is another U.S.-based MOOC developer, founder of EdX, and a MOOC instructor. He agrees with Koller, pointing out that the first peer answer may not be totally correct but, as more and more students join the discussion, a correct answer usually surfaces. Agarwal also agrees with an MIT colleague who says timely feedback turns teachable moments into positive learning outcomes (Agarwal, June, 2013).

The 5-stage Process of Learning

In fact, the process of reflection and discussion is so important to refine a learner's understanding of new material that it is included in Taylor and Hamdy's proposed 5-stage process of learning as outlined in their paper, Adult Learning Theories: Implications for Learning and Teaching in Medical Education. The quote below and the following diagram come from this article (Taylor & Hamdy, June, 2013). Notice the centrality of feedback in the diagram.

[A] discussion between individuals will increase the amount of practical knowledge, and that some things remain a mystery until we talk to someone else with a different range of knowledge or understanding. It follows that the more diverse a learning group's membership is, the more likely the individuals within the group are to learn.

Taylor and Hamdy argue that the feedback phase, where students reflect on new information and compare it to their existing knowledge and, through discussion, to the knowledge of other students, is arguably the most crucial phase. Continuing to quote from Taylor & Hamdy:

…it is where the learner articulates their newly acquired knowledge and tests it against what their peers and teachers believe. The feedback will either reinforce their schema, or oblige the learner to reconsider it in the light of new information.

I believe that choosing a MOOC that supports a dynamic environment for discussion and feedback with course peers is essential to gaining the most out of the learning experience.

My first MOOC was the FutureLearn course, Superpowers of the Ancient World. I have since taken seven other MOOC courses from FutureLearn and one online course from Esri, the software firm that develops the ArcGIS mapping applications for geographic information systems. The ArcGIS class was taught by software developers rather than by academic faculty and included only a single discussion pool not tied to specific course exercises. I learned the material because I am particularly adept at learning to use software and also had twenty years of database design experience. But the course itself offered little opportunity to learn from others with different backgrounds or ideas. I sorely missed the exchange of ideas and inspiration I received in the FutureLearn environment.

Summary and Final Remarks

The importance of in-depth mental processing to learning retention was recognized as far back as 1972 by psychologists Fergus I. M. Craik and Robert S. Lockhart in their foundational paper, Levels of Processing: A Framework for Memory Research, published in the Journal of Verbal Learning and Verbal Behavior (Craik & Lockhart, 1972). Their research found that information with strong visual images or many associations with existing knowledge would be processed at a deeper level and would therefore be retained much longer.

They also acknowledged that retention is further aided by recirculating information to extend attention on the new material, coupled with analysis, comparisons, and elaboration. They emphasized that these processes are really necessary for students to understand and remember content.

The traditional lecture model, used for centuries in higher education as the primary teaching format, simply does not provide these opportunities. First, the length of the presentation, usually 45 minutes to an hour, exceeds the average adult’s attention span. In addition, urgent non-course tasks such as employment and family responsibilities, a factor for the roughly 70% of modern students who work while enrolled, can often compete with the learning task for a student’s attention. Craik and Lockhart point out that studies of selective attention and sensory storage have shown that non-attended verbal material is lost within a few seconds.

An effective MOOC model, on the other hand, can offer an alternative experience that addresses the restrictions of the human brain’s limited working memory capacity and individual differences in the ability to sustain executive control in a distracted state. Since all participants are equipped with a computer, key concepts can easily be illustrated with multimedia, increasing the visual content for visual learners (65% of all adult learners). This connectivity can also provide forums where course material can be analyzed and compared with the existing knowledge of the individual student and of large numbers of classmates with vastly different life experiences.

Critics of MOOCs point to the huge number of enrollees who fail to complete their courses. But Anant Agarwal, founder of EdX and a MOOC instructor, explained in his June, 2013, TED Talk that even though only a little more than 7,000 of the 150,000 students who signed up for one of his classes completed it, he would have had to teach in a traditional classroom for 40 years to reach that many students (Agarwal, June, 2013).

References and Resources
Agarwal, A. (June, 2013). Why massive open online courses (still) matter. TED Talks. (Video file.) Retrieved from
American Psychological Association (n.d.). Memory and aging. Retrieved from
Baddeley, A. (October, 2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience. 4 (10): 829-839. Commercially available online from doi:10.1038/nrn1201. PMID 14523382.
Carnevale, A., Smith, N., Melton, M., & Price, E. (2015). Learning while earning: The new normal. Retrieved from
Conway, A.R., Kane M.J., & Engle, R.W. (December, 2003). Working memory capacity and its relation to general intelligence. Trends in Cognitive Sciences. 7 (12): 547-552. Commercially available online from doi:10.1016/j.tics.2003.10.005. PMID 14643371.
Coursera (2018). Take the world's best courses, online. Retrieved from
Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior. 11: 671-684.
Daneman, M., & Carpenter, P.A. (August, 1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior. 19 (4): 450-466. Commercially available online from doi:10.1016/S0022-5371(80)90312-6.
Engelhardt, L.E., Mann, F.D., Briley, D.A., Church, J.A, Harden, K.P., & Tucker-Drob, E.M.  (September, 2016). Strong genetic overlap between executive functions and intelligence. Journal of Experimental Psychology: General. 145 (9): 1141-1159. Commercially available online from
Engle, R.W., & Kane, M.J. (2004). Executive attention, working memory capacity, and a two-factor theory of cognitive control. In B. Ross (ed.), The psychology of learning and motivation. New York: Academic.
James, H.J. (n.d.). Attention span in adults. Retrieved from
Kane, M., & McVay, J. (2012). What mind wandering reveals about executive-control abilities and failures. Current Directions in Psychological Science. 21 (5): 348-354. Retrieved from
Kohn, A. (2014). Brain science: focus – Can you pay attention? Learning Solutions. Retrieved from
Koller, D. (June, 2012). What we're learning from online education. TED Talks. [Video file.] Retrieved from
Pribram, K.H., Miller, G.A., & Galanter, E. (1960). Plans and the structure of behavior. New York: Holt, Rinehart & Winston.
Salthouse, T.A. (1994). The aging of working memory. Neuropsychology. 8 (4):535-543. Retrieved from
Smallwood, J., & Schooler, J. (2006). The restless mind. Psychological Bulletin. 132: 946-958. Retrieved from
Taylor, D.C., & Hamdy, H. (September 4, 2013). Adult learning theories: Implications for learning and teaching in medical education. AMEE Guide No. 83. Med Teacher. Retrieved from

MOOCs – Models for Learning in the 21st Century: Part 1

Mary Harrsch
Networks and Management Information Systems (Retired)
University of Oregon College of Education

This is a cross-post from the Information Age Education newsletter

Introduction to MOOCs

Reading and writing were developed about 5,200 years ago (History World, n.d.). With these new cognitive tools, information could be stored over time and transported over distances. Moreover, these were powerful aids to helping one’s brain solve complex problems. This technology changed our world.

In addition, reading and writing changed education. Prior to that time, education was essentially an apprenticeship activity, learning by doing and by imitating others who were doing. The development of reading and writing led to the development of schools in which a group of students came together to be taught reading, writing, arithmetic, and local history.

The traditional school model of learning in which groups of students were taught by recognized scholars remained only modestly changed for over 5,000 years.

Then, information storage and processing was revolutionized by the invention of the computer, followed by networks of computers, artificial intelligence, and the World Wide Web. These world-changing and education-changing technologies have upended the time-honored traditional school model.

Many courses based on this modern technology are called Massive Open Online Courses (MOOCs). The first really large enrollment MOOC was run by Stanford University in 2011 (Moursund, 12/30/2015). In this and the next IAE Newsletter, I will present my MOOC experiences and some of the insights I have gained into this new mode of teaching and learning.

The Sage on the Stage

Much of my learning in higher education was obtained after I became an adult. Family responsibilities required most of my attention during my early adult life, so my enrollment in higher education courses occurred sporadically over a number of years. However, I still remember how eagerly I anticipated one of my first learning experiences in a mid-sized university classroom. I have been passionately interested in archaeology since I was a young girl, so I was confident I would find the content fascinating and hoped to learn a great deal. I walked into the lecture hall and found myself in the midst of hundreds of students, most much younger than I was, who had also enrolled in the course. I found a seat close enough to hear the instructor well and to be able to see any examples he might augment with audio-visual materials.

I knew nothing about my other classmates, and really nothing much about the instructor except that his research focused on stone-age tribes in the South Pacific. The professor entered and began a lecture that lasted for almost an hour. Whenever he paused and posed a question, hardly anyone except me even raised a hand to respond. As the hour wore on, I became increasingly hesitant to participate because I did not want to appear to be dominating the discussion. Then it became a challenge to even stay awake.

Sadly, my experience was not unusual, as I found it repeated in other courses I took with other instructors. Passively listening to a "sage on the stage" was not my cup of tea. I succeeded because I was a conscientious student who did all of my homework and knew how to cram for mid-terms and finals. But, I'm not sure I retained much of the information provided, and I certainly found the experience less than optimal. However, eventually I did manage to become an education technologist, although my career path was far from conventional.

Developing My Career in Education Technology

I saw my first personal computer at a trade show in the early 1980s. It was produced by a budding new company founded just a few years earlier by Steve Jobs, Steve Wozniak, and Ronald Wayne. The first computer I purchased was an Apple IIe. I opted for extra memory when I purchased it so it had a total of 128K (notice I said K not MB!). I also bought a selection of basic software including a word processor, an early spreadsheet application, database software, and a financial accounting package. I paid the rather hefty sum of over $6,000 in total. My husband and I were running a large agribusiness operation, though, and the computer made it possible for me to more easily evaluate different crop planting and marketing scenarios.

There were no classes available to learn how to use the PC or its software, so I spent hours with the manuals learning each software package and how the personal computer itself worked. I was able to find a book on the BASIC programming language and studied it as well.

The next three paragraphs summarize my computer technology career path. Notice how rapidly the field was changing during this time.

I was hired by a multi-state restaurant franchise that used MS-DOS-based PCs. There I made the transition to the new operating system and was able to develop one of the first point-of-sale inventory management systems for them. I also developed an immigration compliance tracking system and a program to analyze employee turnover and retention. Later, I implemented an in-house market research program using software I had found in my analysis of emerging technologies. This program eliminated the need for outside contractors that had cost the company hundreds of thousands of dollars over the years.

My next position was as a fiscal manager with the College of Education at the University of Oregon. In addition to managing a budget of more than $20 million annually in academic funds and overseeing the expenditure of millions more in research grants, the Dean explained that he wanted me to computerize the college’s accounting functions and, when that was completed, implement a local area network for the entire college.

I accomplished the Dean’s goals within a year, and also integrated the College of Education’s network into the University’s rapidly expanding Wide Area Network. After that came the transition to computers with a graphical interface, the introduction of the World Wide Web, the implementation of streaming services, and limited development of distance education resources. I also evaluated emerging technologies for educational use, including voice recognition and artificial intelligence, developing a prototype for a virtual professor that I hoped would eventually help faculty manage their office hours’ responsibilities (Harrsch, 2005). I have been surprised that it took almost fifteen more years before artificial intelligence finally began to be introduced into the mainstream. 

MOOC Learning in My “Second Act”

I retired in 2008, after twenty years of managing the College’s networks and management information systems. I realized I finally had the time to seek out learning experiences in ancient history, a subject I had been passionately interested in since I was a child, rather than continue to focus on courses that could advance my career. This time, however, with a comfortable home office and a high-speed internet connection, I chose to enroll in MOOCs. I found this experience to be much more intellectually invigorating than my initial introduction to higher education.

My first MOOC course was Superpowers of the Ancient World: The Near East. It was presented by a team of faculty members at the University of Liverpool through the UK's online FutureLearn program.

FutureLearn is a private company owned by The Open University, an institution of higher education with more than forty years of experience in distance learning and online education (Open University, 2018). FutureLearn launched their first courses in September, 2013, and has served more than seven million people since then.

Other companies offering MOOC courses include Coursera and EdX. Coursera is a U.S.-based online organization offering courses developed by 161 universities and corporate partners in the U.S. and around the world. EdX is an organization founded by Harvard University and MIT in 2012, and now works with 90 global partners.

All of these MOOC providers offer free online classes. A certificate documenting your successful completion of a course (which requires minimum scores on quizzes and tests) is available for a charge of $50 - $60. I have always purchased a certificate and posted the digital document to my LinkedIn profile. This not only compels me to treat the course as a serious learning endeavor, but also provides evidence that I am a serious scholar. I have published a number of papers on various aspects of ancient culture, particularly Roman civilization, and I wanted my readers to have some assurance that my work is authoritative. I also think it is important that we support these institutions in developing such innovative MOOC learning models.

Why FutureLearn?

Why did I choose FutureLearn instead of one of the U.S.-based organizations? Well, first of all, FutureLearn offered courses exploring ancient history and civilizations that were not offered by the U.S. companies. In addition, there are definite differences in their course structure.

Although I have never personally taken a course from either Coursera or EdX, I found an article in Forbes written by a student of multiple Coursera courses (Shah, 12/5/2013). She describes video lectures of 20-to-30 minutes, each accompanied by quizzes and problem sets. She mentions a general student forum where students are encouraged to seek help, mostly from other students.

To me, this sounded like watching non-interactive episodes of The Great Courses followed by a graded test (The Great Courses, n.d.). This did not seem to be much different from a typical college classroom, replacing the “sage on the stage” with a recorded talking head and little to no student interaction. I currently own many Great Courses and have learned a great deal from them by independent study, but I was looking for an experience where I could interact with other students and faculty who would be as passionate about the subject as I am.

Ronny De Winter, a TA for courses on Coursera, states in a Quora post (DeWinter, 12/15/2013):

Extremely diverse starting points for students create a chaotic forum experience [on Coursera]. The scale of enrollment can create huge noise and a very low SNR (signal to noise ratio).

De Winter points out that course discussions monitored by diligent teaching assistants are more focused. They can improve clarity by introducing thread titles and bringing attention to particular discussion topics. Off-topic discussions can be deleted.

In contrast, the discussion forums on FutureLearn are keyed to individual course exercises and focus the discussion on a specific topic, eliminating much of the noise created by funneling all questions for all exercises in an entire course into one cacophonous pool.

The FutureLearn Experience

When you enroll in a FutureLearn course, you are encouraged to introduce yourself in a welcome forum, explain your background, and tell why you are interested in the subject. If you find other students particularly interesting, you can "follow" them so their responses to questions posed during the course can easily be isolated and read.

In the first course I took, Superpowers of the Ancient World: The Near East, Dr. Glen Godenho provided a video introduction to himself and the course, then introduced his graduate faculty facilitators who would be actively participating in class discussions. Dr. Godenho also participated in class discussions when time permitted. Most of the faculty facilitators in the UK assisting a professor with a course are graduate students working on their PhDs in a field related to the subject, very similar to Graduate Teaching Fellows in the U.S. Sometimes, a professor may also be assisted by other full professors. In Superpowers of the Ancient World: The Near East we had a segment on ancient music and a full professor of musicology facilitated that segment along with Dr. Godenho.

As I began the course, I found that each course was composed of about 20 exercises per week that chunked information into activities requiring about 20 minutes each for completion. (Being an extrovert, I probably spent more time in discussion with other students than the average, though!) I also spent time exploring suggested optional supplemental content. The total amount of work per week expected on the part of students roughly equates to that in a traditional 3-credit graduate course in which students are expected to work three hours outside of class for each hour in class, for a total of four hours a week per credit.

Exercises included reading components as well as maps, timelines, and short videos or audio interviews with other content experts. Online applications were also included to practice such tasks as deciphering hieroglyphs or cuneiform inscriptions. Some of the exercises also tasked students with identifying modern events or practices that may offer insight into ancient thinking.

Learning from Fellow Students

What I liked best about the class was the online interaction with other students and with faculty facilitators. Each exercise included questions that each student was required to answer based on their understanding of material provided and their individual perceptions or experiences. These answers appeared in an exercise discussion thread similar to the format used with social media applications like Facebook. The FutureLearn system limited responses to 1,200 characters. At first I found this a little frustrating, but eventually realized that it helped me to rethink my answer in my attempt to be as concise as possible.

Students were encouraged to read at least one page of responses from their classmates and to "Like" and/or "Reply" to student passages to express why they agreed or disagreed with them. Because MOOCs often involve thousands of students per class, yielding pages and pages of discussion, you could choose to filter the discussion to view only the comments of people you were following.

The FutureLearn course management system had an internal notification system that constantly notified you of anyone who either "Liked" or “Replied” to any comments you made in your course profile. In addition, you could opt to receive e-mail digests.

The number of likes an answer received also helped instructors to pinpoint the answers most readily accepted as correct by the students. If an answer cluster demonstrated a shared misconception, the instructor could clarify the correct response and explain why the apparent accepted response was inaccurate. The instructor could then decide to supplement the exercise with additional materials and/or to change the exercise content to make the concept more easily understood for future students. This level of community engagement was not described by the previously mentioned Coursera student and, for me, it had a significant impact on my learning and maintaining my interest level.

Dr. Godenho also set up a course Facebook page so we could share information we found outside of class about the ancient Near East that was not directly related to a particular exercise. Some other courses I have taken have used Twitter with a specified hashtag for this, but I find a Facebook group discussion can more easily be followed and is without a character limit.

The Importance of a Class Cohort

Students are told they may progress at their own pace, but the course is designed to be completed over a defined period of weeks. If you complete the course in that period, you are assured of interaction with faculty facilitators and a core group of students, usually those who are comfortable with scheduling their class participation into their daily lives. If you decide to study the material over a longer period of time, you will definitely miss out on the faculty feedback and on most, if not all, of the more productive peer-to-peer discussions.

I was very fortunate that my "class cohort" was very comfortable with using both the discussion forum and the Facebook group, so we had many lively discussions. The course Facebook group is ongoing, and even though I took the class two years ago I still post items of interest to it.

Overall, I found this type of learning experience to be far superior for me than the passive lecture hall presentations of a traditional higher education setting. But why?

In the next newsletter we will examine psychological factors that influence a student’s learning capacity and attentiveness, and how MOOCs can be designed to optimize the learning experience.

References and Resources
DeWinter, R. (12/15/2013). What are the downsides of Coursera’s discussion forums? Quora. Retrieved from
Harrsch, M. (2005). Extending the learning environment: Virtual professors in education. Retrieved from
History World (n.d.) History of writing. Retrieved from
Moursund, D. (12/30/2015). MOOC enrollment continues to grow. IAE Blog. Retrieved from
Open University (2018). Wikipedia. Retrieved from
Shah, M. (12/5/2013). What is it like to take a Coursera course? Forbes. Retrieved from
The Great Courses (n.d.). Retrieved from

Saturday, January 27, 2018

Making Smart Choices When Selecting Smart Home Devices

A technology resource article by Mary Harrsch © 2017

As someone who researched and developed some early conversational agents back in the 1990s, I am still fascinated by artificially intelligent technology and excited by the plethora of gadgets now being marketed with artificial intelligence driving their systems and their user interfaces. But I admit I am a bit disappointed by the development choices some product manufacturers are making: they seem more interested in making their product lines appear cutting edge because they possess some implementation of artificial intelligence than in whether the product really solves a pressing human problem.

For example, at CES 2018, Samsung showcased their Family Hub smart refrigerator. It is equipped with cameras and claims it can assess the contents of your refrigerator, recommend recipes, and even allow you to shop for groceries without leaving your kitchen. Sounds great, doesn't it? But how realistic are these claims? If you have a lot of leftovers, do you have to use coded containers so the refrigerator can figure out what is inside them? Can the cameras scan the contents of opaque packaging so the refrigerator can determine if you're getting low on a particular item? Or are most of these claims based merely on the refrigerator's new Bixby virtual assistant, which you can tell to add milk to your shopping list or ask what recipe could use leftover ham, zucchini, and eggplant?

Samsung Family Hub refrigerator image courtesy of Samsung, Inc.
As it turns out, based on the marketing claims, I thought the refrigerator was smarter than it really is. The remotely accessible camera only acts like a web cam. There is no artificial intelligence using scans to recognize food items or recording when food items are initially added to the refrigerator so it can keep track of food's freshness. As for the recipe recommendations, the refrigerator is just using its intelligent agent Bixby to come up with those. If that's the case, then I must ask why you would spend over $4,500 for that refrigerator (about twice as much as a traditional refrigerator) when a standard model with an Amazon Echo Dot, Google Home Assistant, or some other relatively inexpensive stand-alone virtual assistant can accomplish most of those tasks for less than an additional $50?

The Samsung unit also has AKG premium quality sound speakers in the doors, a whiteboard for notes, and a built-in screen to view baby monitors, front doors, or status screens of other smart devices.

“The integration of Bixby and SmartThings into the Family Hub is bringing a new level of intelligent connectivity into the room where people spend the most time: the kitchen.” - Samsung corporation.

Perhaps this last statement by Samsung points to the crux of the problem. In our house, we are in the kitchen only about 30 minutes before a meal (prep) and 30 minutes after a meal (cleanup). Being retired we seldom have guests so the meal itself lasts about 15 - 20 minutes. (My husband was a Marine so you sit, eat, withdraw!) At present, I have a typical galley kitchen adjacent to a more spacious dining room. If there is any lingering it will take place in the dining room, not the kitchen.

I own a traditional side by side refrigerator/freezer and have an Amazon Echo Dot on the kitchen window sill. When I pour a glass of milk and notice I'm getting low on milk I just call out to Alexa to put milk on my shopping list. If I have leftover Polish sausage in the refrigerator I can ask Alexa for a recipe using Polish sausage. (If I had an Echo Show, she could show me a recipe that I could then refer to as I prepared the dish.) If I want music, I tell Alexa to play one of my Amazon Music playlists. If I still had kids at home and wanted to tell them to clean their rooms when they get home from school, I could set a repeating reminder at an appropriate time on the appropriate Echo device (Alexa reminders are location specific).

If I were still working, it might be helpful to take a peek into my refrigerator before shopping for groceries on the way home from work, but my Alexa shopping list on my iPhone, which tracks my supply needs throughout the week, is much more comprehensive.

If the refrigerator's cameras were eventually paired with intelligent scanning capability, so that it could recognize food items, record the date they were placed in the refrigerator, and advise you of the status of food freshness, then the jump in price might truly be worth it from a usefulness perspective. With its current limited capabilities, it is not.

Luckily, there is another smart device headed for the market that may take care of this need. Ovie Smarterware produces food containers with smart trackers that indicate when the food in your fridge is on the verge of going bad. The trackers work with a variety of virtual assistants from Amazon, Google, and Apple. When you put new food items into these containers, you tell your assistant to open the Ovie app, press the container's tracking button, and say what is in the container, such as "This is lasagne." Then, as the lasagne ages in the refrigerator, the tracker color changes from green to yellow to red, so a quick glance lets you know which food items need to be used up (or thrown out!).

In addition to containers, the company also makes bands and chip clips with trackers and is working with the FDA to develop an accurate database of food expiration periods. This product is obviously the result of a company truly attempting to solve a very big problem with technology: Americans throw away billions of pounds of food every year. However, whether consumers will be willing to invest in this product and make the effort to use it regularly remains to be seen. If Ovie's marketing people can appeal to those of us conscientious enough to clean our recyclables and put them in appropriate containers for disposal, maybe they can pull this off.

What about other smart kitchen appliances? Although it might sound great to have your virtual assistant brew a cup of coffee while you're getting dressed, the bottom line is that someone must keep the coffeemaker topped off with water, unless you plumb it directly into a water line and provide a smart tap that opens and closes to dispense the amount of water needed before the scheduler tells the coffeemaker to brew.

The same can be said for intelligent slow cookers. Someone must put ingredients in the slow cooker before you schedule it to come on at an appropriate time, and raw meat and some other ingredients don't keep well for extended periods at room temperature. A slow cooker that could switch from chill to heat, then cook for the appropriate time based on when you wished the food to be ready, would get my attention.

Even all of the wonderful lighting products I've seen have limitations. Most of the smart wall switches currently on the market require a neutral wire, which was not common in home wiring until 2011. The few switches that do not require a neutral wire usually require a hub in addition to the bulb, so you end up paying more for them and have to configure yet another device to connect them. I have been able to use Wemo smart plugs to connect all of my living room lamps, though, and can easily turn them all on and off with a couple of words. Still, I would like to integrate my overhead lights and porch lights into my voice-managed system.

But were there other devices clearly solving a human problem? Well, I think Kohler's smart bathtub would be a good choice. Running a bath does take time and having both the depth of the water and water temperature preset is particularly helpful for individuals who may have diminished sensory perception. Years ago my car heater malfunctioned on my way home in the middle of a blizzard. Although I tried to keep my hands warm by placing one and then the other under my armpit, by the time I got home 30 minutes later I could barely feel my hands and feet. I went into the bathroom to run a tub of warm water and couldn't feel if the water was hot or cold. Seniors, especially those suffering from neuropathy, would really benefit from this type of tub, besides the efficiency of having the tub run while you are doing something else. At present, though, I personally have no need to talk to my toilet or ask it to warm up the seat before I settle down onto it. So I would not consider spending extra money for that part of the smart bathroom.

Another gadget promoted at CES that could be useful, especially to seniors, is a pocket-sized LinkSquare spectrometer. This little device, when paired with your smartphone, captures how a substance's molecules vibrate, an optical fingerprint that reveals whether food is safe to consume or spoiled. In her later years, my mother's sense of smell diminished to the point where she could no longer tell if food had spoiled. This kind of device would have been very helpful to her. The gadget can also identify mislabeled and diluted liquor, detect counterfeit and mislabeled drugs, and detect counterfeit money, very helpful for those working as cashiers. I think the $299 price tag would need to come down substantially, though, before it would find its way into common use.

I'm already convinced smart TVs are truly helpful as well. In our house we have a large-screen HD television in the living room connected to an Alexa-enabled DISH satellite receiver and a smaller HD TV in the dining room facing the dining table. I don't have to look for one of a handful of remotes to change channels or find a particular movie or television show, as each TV is controlled by a nearby Echo Dot. I use Wemo wifi-enabled smart plugs between the TVs and the power outlets to remotely control the on/off switches. But there are features that could prove useful on a voice-enabled TV. I would really like to control the volume of my Polk sound bar in the living room remotely, and to be able to remotely change my video inputs so I could access my Roku and my Blu-ray player without shuffling remotes. The newer Samsung smart TVs auto-detect devices attached to their HDMI ports and allow you to control them accordingly. But then I'd have to give up my 3D capability!

Although the voice features of my DISH Hopper 3 are really great, I wish it could also let me join whatever program is in progress in the living room by simply saying something like "Join living room program." That way my husband could easily continue watching in the dining room whatever he has been enjoying in the living room without me having to pick up the satellite remote for the dining room TV and selecting Options -> TV viewing -> Living Room, etc.

Voice-enabling lights, locks, appliances, and televisions can be incredibly convenient. But I hope you'll consider how useful the technology actually is before paying substantially more for whatever product you're considering.

Monday, November 20, 2017

Making an existing website responsive to mobile devices

A technology resource article by Mary Harrsch © 2017

Twelve years ago I built a website for an arts foundation in southern California that showcases the work of American sculptor George S. Stuart. For the past few years, the sponsors of the website have been pressuring me to redesign the site so that it adjusts for mobile devices, even though I've been retired for the past nine years. I finally gave in and told them I would do what I could, but responsive design was developed years after I retired, and since I no longer design websites for a living, I had not kept up with those developments.

One of the sponsors sent me a link to a YouTube video produced by South African programmer Quentin Watt that demonstrates in very simple terms how to replace fixed tables with adjustable divs, sections, and asides. I found it very helpful as a starting point.

From it I learned that you must include a very important meta tag in the head section of your page:
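The tag itself was stripped from the original post; it is almost certainly the standard viewport meta tag used in responsive design tutorials of that era:

```html
<!-- Tells mobile browsers to render the page at the device's actual width
     instead of a zoomed-out desktop layout. Without it, media queries
     targeting small screens never trigger as expected. -->
<meta name="viewport" content="width=device-width, initial-scale=1.0">
```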

Then Quentin proceeded to show how you could use CSS styles and a media query to define style changes between a computer-sized view of the website and a view for a much smaller mobile device. His recommendation to use the Responsive Developer Tools in Firefox was very helpful.

Following his method of converting tables to divs with sections and asides, I was able to get my pages to resize, but I could not get them to collapse into a single column when the device width fell below a specified size using his media query example.
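As a sketch of the kind of media query involved (the selector names and the 600 px breakpoint here are illustrative, not taken from the video):

```css
/* Computer-sized view: two columns side by side */
section#content { float: left;  width: 70%; }
aside#sidebar   { float: right; width: 28%; }

/* Mobile view: below the breakpoint, stack both full width */
@media screen and (max-width: 600px) {
    section#content,
    aside#sidebar {
        float: none;
        width: 100%;
    }
}
```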

Also, I wasn't sure how to handle my dynamic PHP-generated tables, since I could not assign their contents to unique container ids: the number of content elements changes each time the page is retrieved, based on the choices made by the visitor. Then I found a very helpful example document on the website:

By studying the page source code, I saw that the table code was left intact and a media query was used to manipulate all table, tr (table row), and td (table cell) attributes without changing each element to a div container and assigning a unique id to it.

Using this media query example, I was able to collapse my tables to a single column once the device width fell below 600 px, the minimum size I specified for full view display. This was great for pages with large images in tables with accompanying information in an adjoining table cell. I wasn't using table headings, though, so I removed the style instructions for them. I also noticed that when the tables collapsed to a single column, there was a lot of space around the images. I wanted the images to display as large as possible so I removed all the padding style references from the td elements. To emulate padding, I reduced the size of the images to be slightly less than the element size. If the td element was 100% I specified the images to display at 98% to provide padding between them if two were displayed side by side in a larger view. This also resolved a problem with a graphic I needed to align right but slightly nipped into the border when I did so.
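A hedged reconstruction of that media query technique, folding in the details described in this article (the 600 px breakpoint, the removed td padding, and the 98% image width):

```css
/* The PHP-generated table markup stays intact; below 600 px the
   table elements are simply restyled to stack in one column. */
@media screen and (max-width: 600px) {
    table, tbody, tr, td {
        display: block;     /* each cell becomes its own full-width block */
        width: 100%;
    }
    td { padding: 0; }      /* padding removed so images fill the column */
    td img { width: 98%; }  /* slightly under 100% emulates padding */
}
```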

But I did not want my navigation bars to collapse so I worked out a hybrid of the two responsive concepts. I split my rather long navigation bars into two rows then converted them to div containers with sections and asides so they would resize without collapsing.

Related style code:
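The style code itself did not survive the blog post. Under the assumption that each nav row holds a handful of links, the hybrid might look something like this (the ids, class names, and link targets are hypothetical):

```html
<!-- One of the two nav rows: children keep percentage widths at every
     screen size, so the bar shrinks but never stacks into one column. -->
<div id="container10">
    <section class="navitem"><a href="gallery.php">Gallery</a></section>
    <aside class="navitem"><a href="history.php">History</a></aside>
    <aside class="navitem"><a href="contact.php">Contact</a></aside>
</div>
```

```css
#container10 { width: 100%; overflow: hidden; }
#container10 .navitem {
    float: left;
    width: 33%;          /* three items share the row at any width */
    text-align: center;
}
/* Deliberately no media query override here, so the bar never collapses. */
```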

Other points to consider:

The website I was working on was originally designed with an early version of Dreamweaver. In the version used, Dreamweaver would automatically insert width and height dimensions whenever you inserted an image. I had to examine each page of the website and scrub out all of these dimensional references to make the site display properly in responsive mode.

I also had tables of images and text where the text was displayed in the left cell and the image in the right cell. When the table was collapsed, it was confusing for a viewer, as it would appear the text was referring to the image above it. To remedy this problem, I had to move the images into the left cell and the text to the right cell. In my case that entailed moving PHP code elements in a looping script, a bit challenging since my PHP skills are now a bit rusty!

I also added a 1 px border to the tr (table row) element style in the media query so each discrete information set, the image and its related text, would be boxed in the collapsed single-column mode.

In Quentin's div container method, he had specified images to display at 100%. I had to modify that to a max-width of 100% to prevent images from expanding beyond their original size if the page was opened on a large monitor.
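The one-property difference (a generic selector is shown; the site's actual selectors may differ):

```css
img {
    /* width: 100% would stretch images past their native size on a large
       monitor; max-width caps scaling at the original dimensions. */
    max-width: 100%;
    height: auto;   /* keeps the aspect ratio once the old fixed
                       width/height attributes are scrubbed out */
}
```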

We also ended up redoing some long headline graphics, making the font larger and breaking each headline into two lines to reduce the amount of resizing on smaller devices.

I used an overall containment div to restrict the page elements to a maximum centered display of 960 px, ensuring the visual integrity of the page design on large monitors. I used 320 px as my minimum device width; Quentin said this is about the smallest device size, dating back to the iPhone 3 period.
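A minimal sketch of that containment div's styling (the id is illustrative):

```css
#pagewrap {
  max-width: 960px;  /* cap the layout on large monitors */
  min-width: 320px;  /* smallest supported device width */
  margin: 0 auto;    /* center the page */
}
```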

I had a row of four 40 px × 40 px icons for the gallery's social network sites. These could fit without resizing into my minimum width of 320 px, so I used the div method along with style instructions to display the container at a minimum width of 200 px, leaving 40 px of space for emulated padding so the icons would display at full size.

I also used a combination of an external stylesheet and an embedded stylesheet on each page. Styling for common elements went in the external stylesheet, but each page was unique in content and formatting. Rather than create site-wide unique container ids, I reused the same ids for common elements like the nav bars and social network icons, then numbered the remaining containers with ids unique only to that page. For example, my nav bars were always containers 10 through 15 and my row of social network icons was always container 9, while the other containers on the page were numbered 1 through 8, each with styling instructions specific to that page only. So, container 6 on one page would have different styling instructions than container 6 on another page. In my opinion, this made troubleshooting an individual page easier than scanning dozens of container numbers in an external stylesheet.

I ran into problems troubleshooting the pages in different browsers, too. Google Chrome invariably served a cached copy of the external stylesheet whenever I made additions to it. I researched the issue and discovered this has been a problem with Chrome for quite some time. Some programmers said the fix was to use a full URL reference to the external stylesheet instead of a relative reference, but this "fix" didn't always work. Out of frustration, I ended up adding the new font styles to each page's embedded CSS instead.
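For reference, another common general workaround for stale cached stylesheets is cache-busting: appending a version query string to the stylesheet reference so the browser treats the file as new after each change. The path and version number here are illustrative:

```html
<!-- Bump the query string whenever the stylesheet changes -->
<link rel="stylesheet" href="styles/site.css?v=2">
```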

Microsoft Edge was also stricter in its code interpretation. If an element in a container div had no related information in an adjoining aside, I tried using just a div tag with an id, without any section tag or id. Chrome and Firefox had no problem with this, but Microsoft Edge would ignore my center-alignment rules and left-align the element (the default) instead. So, I had to ensure that whenever I included an element in a container div, it was wrapped in at least a section tag with its own id.
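A minimal sketch of the structure that satisfied all three browsers (the ids, inline style, and image are illustrative):

```html
<div id="container3" style="text-align: center;">
  <!-- Edge ignored centering on a bare child element,
       so each element gets a section wrapper with its own id -->
  <section id="section3a">
    <img src="photo.jpg" alt="Gallery photo">
  </section>
</div>
```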

As I wanted to ensure no spurious code was added back into the pages by an HTML editor, I edited in Notepad without the aid of Dreamweaver. This, of course, produced its own issues, since a single missing quote or brace could wreak havoc on the page display. There are now excellent tools for responsive web design, including CoffeeCup's Site Designer, which uses the Bootstrap and Foundation frameworks to avoid the kind of hand-coding I did. However, I was working on an existing website with hard-coded tables and dimensions generated by PHP, and I didn't think these tools would be that helpful in such circumstances. They are also rather costly ($189), and the nonprofit's resources were scant. Still, if you design such websites on a regular basis, I would suggest investing in one of these tools.
Additional suggested reading: