
Wednesday, May 16, 2018

Improving College Students’ and Others’ Mental Health with Conversational Agents

Mary Harrsch
Networks and Management Information Systems (Retired)
University of Oregon College of Education


This is a cross-post from the Information Age Education newsletter

Mental illness is common in the United States. About one in four adults suffers from some form of mental illness in a given year (Holmes, 1/14/2015).

This level of occurrence is even higher for college students—perhaps as high as one in two, according to the article, Delivering Cognitive Behavior Therapy to Young Adults with Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial (Fitzpatrick et al., April-June 2017). In a related article, Depression and College Students, Michael Kerr points out that financial worries due to high debt and poor employment prospects, coupled with failed relationships, lack of sleep, poor eating habits, and too little exercise, frequently result in the development of depression (Kerr, 3/29/2012). There are also many life transitions and challenges to a student’s identity. Quoting from Margarita Tartakovsky’s article, Depression and Anxiety Among College Students (Tartakovsky, 7/17/2016):

…college calls for a significant transition, where “students experience many firsts, including new lifestyle, friends, roommates, exposure to new cultures and alternate ways of thinking,” observes Hilary Silver, M.S.W., a licensed clinical social worker and mental health expert for Campus Calm.
….
Adjusting to college also influences identity — a phenomenon Silver has termed Identity Disorientation. “When students head off to college, the familiar people are no longer there to reinforce the identity these students have created for themselves.” This can make students “disoriented and feel a loss of their sense of self,” contributing to symptoms of depression and anxiety.

Many of these college students do not seek mental health services. Referring again to the Fitzpatrick, et al., article (April-June, 2017):

…up to 75% of the college students that need them do not access clinical services. While the reasons for this are varied, the ubiquity of free or inexpensive mental health services on campuses suggests that service availability and cost are not primary barriers to care. Like non-college populations, stigma is considered the primary barrier to accessing psychological health services.

As described in this article, in their effort to overcome this fear of stigma, Stanford researchers developed a virtual therapist, or conversational agent (often called a chatbot). The chatbot employs artificial intelligence and natural language processing to deliver cognitive behavior therapy (CBT) to college students who self-identify as suffering from significant depression and/or anxiety.

Stanford's virtual therapist is named Woebot. Like many chatbots, Woebot uses natural language processing to interpret student responses to questions posed by the virtual therapist, then guides the conversation to an appropriate node of a decision tree to provide suggested actions.
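To illustrate the general idea (this is not Woebot's actual code—the node names, prompts, and keywords below are invented for illustration), a chatbot's decision tree can be sketched as a set of nodes, each with a prompt and keyword-triggered routes to other nodes:

```python
# Illustrative sketch only: Woebot's actual tree and natural language
# processing are far more sophisticated than this keyword routing.

DECISION_TREE = {
    "start": {
        "prompt": "How are you feeling today?",
        "routes": {"anxious": "anxiety", "lonely": "loneliness"},
        "default": "checkin",
    },
    "anxiety": {"prompt": "Let's try a brief breathing exercise.",
                "routes": {}, "default": "start"},
    "loneliness": {"prompt": "I'm so sorry you're feeling lonely.",
                   "routes": {}, "default": "start"},
    "checkin": {"prompt": "Tell me more about your day.",
                "routes": {}, "default": "start"},
}

def next_node(current: str, user_reply: str) -> str:
    """Route the conversation to the node whose keyword appears in the reply."""
    node = DECISION_TREE[current]
    reply = user_reply.lower()
    for keyword, target in node["routes"].items():
        if keyword in reply:
            return target
    return node["default"]

print(DECISION_TREE[next_node("start", "I feel really anxious today")]["prompt"])
```

A production agent would replace the simple keyword test with real natural language processing, but the routing structure is the same.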

The Original Chatbot


Chatbot software was originally based on the "Eliza" virtual therapist developed in the early 1960s by Professor Joseph Weizenbaum at the Massachusetts Institute of Technology (Markoff, 3/23/2008). I studied "Eliza" in the late 1990s and used it as a model for a virtual professor I developed while working at the University of Oregon. I was so excited to see that someone had finally recognized the potential of artificial intelligence to help people cope with life's challenges!

Dr. Weizenbaum's “Eliza” virtual therapist was initially designed to simply keep a conversation going between his chatbot and a human to see if the human could figure out they were talking to a computer and not a real person. However, Stanford's Woebot chatbot uses the scientific principles of cognitive behavior therapy to encourage its human "friends" to develop a positive mindset and overcome depression. Quoting again from the Woebot clinical trials report by Fitzpatrick, et al. (April-June, 2017):
"Psychoeducational content was adapted from self-help for CBT. Aside from CBT content, the bot was created to include the following therapeutic process-oriented features:
  • Empathic responses: The bot replied in an empathic way appropriate to the participants’ inputted mood. For example, in response to endorsed loneliness, it replied “I’m so sorry you’re feeling lonely. I guess we all feel a little lonely sometimes” or it showed excitement, “Yay, always good to hear that!”
  • Tailoring: Specific content is sent to individuals depending on mood state. For example, a participant indicating that they feel anxious is offered in-vivo assistance with the anxious event.
  • Goal setting: The conversational agent asked participants if they had a personal goal that they hoped to achieve over the 2-week period.
To engage the individual in daily monitoring, the bot sent one personalized message every day or every other day to initiate a conversation (ie, prompting). In addition, “emojis” and animated gifs with messages that provide positive reinforcement were used to encourage effort and completion of tasks."

A Chat with Woebot


Woebot is now freely available online (Woebot, n.d.). On the Woebot website, you can click on a link that connects you and Woebot to a private Facebook Messenger session that no one else can see. Then Woebot talks with you about how you are feeling and how you can keep a positive frame of mind using techniques from cognitive behavioral therapy. I've had talks with Woebot about those pesky "should" statements, discussions about self-defeating "all-or-nothing" viewpoints, the futility of trying to predict other people's reactions, and the importance of self-compassion. Sometimes the little bot even provides interesting short videos about behavioral research.

One that I found particularly interesting was Carol Dweck’s video about the problem of students who develop a fixed mindset when they are praised as "so smart" from a young age. I strongly recommend this excellent 10-minute video (Dweck, December, 2014).

After your initial session, Woebot then contacts you each day through Facebook Messenger and engages in a short friendly conversation. This can teach you how to identify your strengths, to mentally rework your own internal dialogue to develop a healthier opinion of yourself, and to recognize negative approaches in your relationships with others. If you wish to talk to Woebot about a specific problem, you can also initiate a conversation like you would with any of your friends on Facebook Messenger. Woebot is also available as a free smartphone app in the Apple or Google Play Stores.

Using Gamification to Combat Poor Adherence


In their article cited earlier, Fitzpatrick, et al., note that other psychologists have been experimenting with computerized CBT, but that motivating patients to continue interaction with computerized CBT tools has been challenging:

In recent years, there has been an explosion of interest and development of such services to either supplement existing mental health treatments or expand limited access to quality mental health services. This development is matched by great patient demand with about 70% showing interest in using mobile apps to self-monitor and self-manage their mental health. Internet interventions for anxiety and depression have empirical support with outcomes comparable to therapist-delivered cognitive behavioral therapy (CBT). Yet, despite demonstrated efficacy, they are characterized by relatively poor adoption and adherence.

To address these problems of adherence, Woebot's team of researchers adopted the "daily dose" model, since online learning studies have shown that small doses of learning embedded in everyday activity appear to be more effective than a single lecture. They also introduced some game-like elements designed to increase the likelihood that people will come back the next day.

CBT for Seniors

I contacted the CEO of the Woebot project, Dr. Alison Darcy, and submitted written interview questions, to which she responded. In my questions I encouraged her to develop a Woebot to assist much older people with depression and loneliness. I pointed out that seniors' mental health needs differ significantly from those of college students, as the challenges of aging often involve chronic illnesses, deaths of loved ones, living alone, and feelings of irrelevance when no longer employed in the workplace.

I also pointed out that, although Medicare recognizes that depression has a serious impact on quality of life and ensures that a senior's annual wellness visit includes questions about their emotional state, many seniors take friends or family members with them to the doctor. Thus, they may be embarrassed to admit to their physician that they are feeling depressed or even suicidal while their friends or family members are present—very much the same fear of stigma demonstrated by the college students. To make the problem even more difficult to address, many family physicians are not trained in dealing with mental health issues, and the best they may be able to do is refer the senior to a specialist. Appointments with such specialists are usually weeks away, and seniors on limited incomes often cannot even afford the co-pay, a sad fact of life in the U.S. commercial health care model.

I also think the long-term caregivers may themselves need yet another type of Woebot, one that could help them to deal with their own feelings of frustration and even anger that may often crop up when dealing day-in and day-out with a patient or loved one with physical and emotional impairments.

CBT Delivery with Virtual Assistants

With the growing presence of voice-activated virtual assistants like Amazon's Alexa, I also expressed my support for porting Woebot to a voice-only interface to Darcy in my written interview with her. Many older adults are not as technology-savvy as college students and probably are not as comfortable on Facebook or a smartphone.

In the clinical analysis of their Woebot development project, Darcy and her fellow researchers apparently agreed with me in theory, saying:

Theoretically, conversational interfaces may be better positioned than visually oriented mobile apps to deliver structured, manualized therapies because in addition to delivering therapeutic content, they can mirror therapeutic process. Conversational agents (such as Apple’s Siri or Amazon’s Alexa) may be a more natural medium through which individuals engage with technology. Humans respond and converse with nonhuman agents in ways that mirror emotional and social discourse dynamics when discussing behavioral health.

However, in my interview with her, Darcy expressed reservations about eliminating the written aspects of therapy made possible by the messenger interface in Facebook or on a smartphone. Continuing to quote Darcy:

The core of what we do—the CBT skills that are triggered when someone is upset in the moment that they reach out to Woebot —is actually dependent on writing down negative automatic thoughts. This is true even in the therapist's office, because it seems to be central to externalizing the thoughts. That is, there is something in seeing your negative thoughts written down that allows you to process it in a different way, ultimately allowing it to be intervened upon (by rewriting).

I do hope she reconsiders. For now, though, I think Woebot, even in its current iteration, could prove helpful to millions of people. I know I find confessing my deepest thoughts to a properly programmed computer application less troubling than revealing them to another human being, who may have biases of their own.

Summary and Final Remarks

The skyrocketing cost of higher education is adding to the mental toll that the transition to higher education and adult life takes on modern college students. With studies suggesting that as many as one in two college students suffers from some form of mental illness, psychologists worldwide are now focused on providing mental health care to these young adults. But the stigma that often accompanies mental health treatment remains an obstacle.

Clinical trials with computerized cognitive behavior therapy have demonstrated that CBT delivered anonymously in a computerized environment is as effective as person-to-person talk therapy in relieving symptoms of depression and anxiety. Furthermore, because these therapy sessions are conducted without patient tracking, the fear of stigma can be eliminated. Tools such as conversational agents like Woebot, in combination with gamification strategies, can be used to encourage students to adhere to a treatment program.

As artificially intelligent voice-activated interfaces become more widespread, computerized CBT may become part of students’ daily hygiene to help them to maintain the best outlook possible as they navigate higher education’s landscape.

References and Resources
Bickmore, T., Gruber, A., & Picard, R. (October, 2005). Establishing the computer-patient working alliance in automated health behavior change interventions. Patient Education Counseling. Abstract retrieved 4/19/2018 from https://www.researchgate.net/publication/7567340_Establishing_the_computer-patient_working_alliance_in_automated_health_behavior_change_interventions.
Burns, D. (1980). Feeling good: The new mood therapy. New York: Harper Collins.
Burns, D. (2006). When panic attacks. New York: Harmony.
Dweck, C. (December, 2014). The power of believing that you can improve. TED Talks. (Video, 10:20.) Retrieved 4/19/2018 from https://www.ted.com/speakers/carol_dweck.
Fitzpatrick, K.K., Darcy, A., & Vierhile, M. (April-June, 2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health. Retrieved 4/19/2018 from http://mental.jmir.org/2017/2/e19/. DOI: 10.2196/mental.7785. PMID: 28588005. PMCID: 5478797.
Holmes, L. (1/14/2015). 19 statistics that prove mental illness is more prominent than you think. Wellness. Retrieved 4/19/2018 from https://www.huffingtonpost.com/2014/12/01/mental-illness-statistics_n_6193660.html.
Hunt, J., & Eisenberg, D. (January, 2010). Mental health problems and help-seeking behavior among college students. Journal of Adolescent Health.
Kerr, M. (3/29/2012). Depression in college students: Signs, causes, and statistics. Healthline. Retrieved 4/19/2018 from https://www.healthline.com/health/depression/college-students#1.
Kessler, R.C., et al. (July, 2007). Age of onset of mental disorders: a review of recent literature. Current Opinion in Psychiatry.
Markoff, J. (3/23/2008). Joseph Weizenbaum, famed programmer, is dead at 85. The New York Times. Retrieved 4/19/2018 from https://www.nytimes.com/2008/03/13/world/europe/13weizenbaum.html.
Tartakovsky, M. (7/17/2016). Depression and anxiety among college students. PsychCentral. Retrieved 4/19/2018 from https://psychcentral.com/lib/depression-and-anxiety-among-college-students/.
Towery, J. (2016). The anti-depressant book: A practical guide for teens and young adults to overcome depression and stay healthy. Palo Alto, CA: Jacob Towery.
Woebot (n.d.). Woebot. Retrieved 4/15/2018 from https://www.woebot.io/.
Zivin, K., et al. (10/1/2009). Persistence of mental health problems and needs in a college student population. Journal of Affective Disorders.

Saturday, January 16, 2016

Extending the Learning Environment: Virtual Professors in Education

A technology resource article by Mary Harrsch © 2005

For those of you interested in artificial intelligence development, here is an archive copy of a presentation I gave in 2005. (I'm consolidating my online contributions!)



Extending the Learning Environment: 
Virtual Professors in Education

By Mary Harrsch
Network & Management Information Systems
College of Education, University of Oregon
[2005]

Six years ago [1999], my sister was telling me about a fascinating History Alive Chautauqua event she had attended near Hutchinson, Kansas.  The program brings a reenactor portraying an historical figure into schools and communities for an educational presentation and question and answer session.  I thought to myself, “It’s too bad more people can’t take advantage of such a unique learning experience.”  Then, the technologist within me began to wonder if there was a way to create a virtual Chautauqua experience online.  As I pondered this possibility, I realized that if I could find software that could be used to create a “virtual” person online, I could not only recreate the experience of the Chautauqua, but provide a tool faculty could use to answer course-specific questions.  It could even be used to provide information about the professor’s personal interests and research to enhance the sense of community within the learning environment.

My quest led me to a website that included links to a number of different software agent projects.  I learned that the type of agent I needed was commonly called a “chatterbot”.  The first “chatterbot” was actually developed long before the personal computer.  In the early 1960s, Joseph Weizenbaum created “Eliza”, a virtual psychoanalyst.

In his efforts to create a natural language agent, Weizenbaum pointed out that he had to address the technical issues of:

  • the identification of key words,
  • the discovery of minimal context,
  • the generation of responses in the absence of keywords.

As I began to explore different agent implementations, I found that, in addition to these issues, the application needed to be able to prioritize keywords to discern the most appropriate response.  Several agents I evaluated, including Sylvie, a desktop assistant developed by Dr. Michael ("Fuzzy") Mauldin; Artificial Life's WebGuide; Carabot 500, developed by the U.K. company Colorzone; and Kiwilogic's Linguibot (now Artificial Solutions, Inc.), used slightly different methods to set the priority of subject keywords to select the most appropriate responses.  The response with matching keywords under the subject with the highest level setting was “fired” – displayed to the user.  However, when editing their script files, I found keeping track of subject priorities was challenging.

Another problem with many script-driven agents I evaluated was the use of left-to-right parsing sequences that did not compensate for a variance in the order of keywords in a question. Each query had to be evaluated for subject and for matching character strings, based on left-to-right word order with the use of various “wildcard” characters to indicate placement of keywords within the overall question.  Therefore, you often had to have multiple script entries to compensate for different word order.  For example, if a student asks “How do I change my password in e-mail?” you would need one script entry. If the student asks “How do I change my e-mail password?” a different script entry would be required:

* email * * password * as well as
* password * * email * to trap for either wording.

Although this attention to script design resulted in improved response accuracy, the scripting knowledge required for these agents was not something I would expect a faculty member to have the time or desire to learn.

A third problem with several of the agent applications I used was the necessity to unload and reload the agent each time the script file was edited.  If students were actively querying the agent, you could not save any script changes until the script file was no longer in use.

When I invested in the Enterprise Edition of Artificial Life’s WebGuide software, I realized the importance of a logging feature that I could use to study and improve my guide’s responses.  I also recognized how valuable it would be, in a virtual tutoring environment, for a student to be able to print out a transcript of their tutoring session for future study.  Not only was this feature absent in the agents I evaluated, but responses rendered with Javascript or Flash could not even be highlighted and copied to the clipboard.

One day, I explored Ultra Hal Representative, developed by Zabaware, Inc.  I liked the ability Ultra Hal provided to program the agent through a web interface.  It had the capability to include links to related information.  It could be customized with personalized graphics.  It logged interactions.  Best of all, it had a straightforward approach to editing: no scripting required, just type your question three different ways, then type your intended response.

But I soon discovered that, without the ability to identify keyword priority, the results produced by whatever algorithm was built into the agent engine were too inaccurate for a virtual tutoring application.

I needed a product that could be programmed to be “omniscient”. 

“Effective ITS require virtual omniscience -- a relatively complete mastery of the subject area they are to tutor, including an understanding of likely student misconceptions.” (McArthur, Lewis, and Bishay, 1993)

I needed a virtual professor that could be “programmed” by real professors, the individuals who would have a mastery of the subject and an understanding of student misconceptions.  But all of the chatterbots I had encountered so far (with the exception of Ultra Hal) required knowledge of scripting that most faculty members do not have the time to learn.  I would not have the time to work one-on-one with faculty developers, and paying a programmer to work with a faculty member is also too expensive.  (I noticed that most developers of commercial agents actually relied on the scripting needs of their clients for their primary revenue stream.)  So, I decided to attempt a radically different approach to agent design.

I am an experienced Filemaker Pro solutions developer, and one day, while reviewing some of Filemaker’s text functions, I realized that the Position function could be used to detect keywords in a text string.  The beauty of the Position function is that the keyword can be identified anywhere within the target text; it is not dependent on a left-to-right orientation, and Filemaker is not case sensitive.  Filemaker Pro also allows most text-processing script calls to be used with its Instant Web Publishing interface, which I realized would greatly simplify web integration.
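The same behavior is easy to sketch in a general-purpose language. Here is a minimal Python analogue of the Position-based test (the function name is mine, not Filemaker's):

```python
def contains_keyword(question: str, keyword: str) -> bool:
    """True if the keyword occurs anywhere in the question, regardless of
    word order or letter case -- analogous to testing that
    Position(question; keyword; 1; 1) > 0 in Filemaker."""
    return keyword.lower() in question.lower()

# Both word orders from the e-mail example above match the same test,
# with no need for separate left-to-right wildcard patterns:
for q in ("How do I change my password in e-mail?",
          "How do I change my e-mail password?"):
    print(contains_keyword(q, "password"))
```
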

So, reviewing my experiences with the agent applications I had used, I developed a list of features that I wanted to incorporate:

Web functionality:
Multiple agents controlled by a single administration console
Web-based query interface
Web-based editing interface
Multiple graphic format support
Web accessible logging function for both agent editor and student user
Ability to display related resources

Query processing functionality:
Question context awareness (who, what, when, where, why, how, etc.)
Ability to weight keywords by importance without user scripting
Ability to return an alternate response if a question is asked more than once
Ability to use one response for different questions
Ability to process synonyms, international spelling differences, and misspellings
Independent of word order
Not case sensitive

Structural Design:
Modular design to enable importation of knowledge modules developed by others
Agent specific attributes to customize the interface and responses such as a personal greeting, the opportunity to use the person’s homepage as a default URL, information about area of expertise and research interests for alternative agent selection criteria, custom visual representations, etc.

I began by designing my response search criteria.  I programmed the agent search routine to categorize responses by the first word of the query – usually What, Where, Why, How, Who, Did, Can, etc. to establish the question context. Then I used position formulas to test for the presence of keywords.  I then developed an algorithm that weighted the primary keyword or synonym and totaled the number of keywords found in each record.

I designed the search function so that when the visitor presses the button to ask their question, the database first finds all responses for the question category (who, what, when, etc.) containing the primary keyword (or its synonym).  Responses are then sorted in descending order by the total sum of keywords present in each response.   The first record – the one with the most keyword matches – is displayed as the answer. 

If there are no category responses containing the primary keyword, then a second find will execute to look for all responses with the keyword regardless of category.  In working with other agent products, I have found that if you return a response with at least some information about the keyword, even if it is not an exact answer to the question, the student assumes the agent recognized their question and may learn auxiliary information that is still helpful to them.

For example, if a visitor asks my virtual Julius Caesar if he really loved Cleopatra, he will answer “Cleopatra…ah, what an intriguing woman.”  Not only is this more in character with Caesar (most of his female dalliances were for political reasons) but the answer could also be appropriate for a different question, “What did you think of Cleopatra?”  My search routine would find it in either case because of the weighting of the primary keyword, Cleopatra.

If there are no responses containing the primary keyword, a third find looks for any generic category responses.  For example, if a student asks who someone is and you have not programmed your agent with a specific answer for the keyword (the person they are asking about), the agent will reply with an appropriate “who” response such as “I’m afraid I’ve never made their acquaintance.” 

If a student’s question does not begin with any words set as category words, the last find will return a generic “what” response such as “I may be a fountain of knowledge, but I can’t be expected to know everything.”  Programming the agent with default generic responses, ensures that the agent always has something to say, even if it knows nothing about the subject.  I developed a small database of generic responses for each question category that is imported into an agent database each time a new agent is created.  The faculty member can go through the responses and edit them if they wish.
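Putting the pieces together, the tiered search described above can be sketched as follows. The response records and field names are hypothetical stand-ins for the Filemaker database, not the actual solution:

```python
# Hypothetical sketch of the tiered response search; the records below are
# invented examples in the spirit of the virtual Julius Caesar described above.

import random

RESPONSES = [
    {"category": "who", "primary": "cleopatra", "keywords": {"cleopatra", "love"},
     "text": "Cleopatra... ah, what an intriguing woman."},
    {"category": "who", "primary": None, "keywords": set(),
     "text": "I'm afraid I've never made their acquaintance."},
    {"category": "what", "primary": None, "keywords": set(),
     "text": "I may be a fountain of knowledge, but I can't be expected "
             "to know everything."},
]

def answer(question: str) -> str:
    words = question.lower().rstrip("?").split()
    category, qwords = words[0], set(words)

    def score(r):  # how many of this response's keywords appear in the question
        return len(r["keywords"] & qwords)

    # 1. Category responses containing the primary keyword.
    tier1 = [r for r in RESPONSES if r["category"] == category
             and r["primary"] and r["primary"] in qwords]
    if tier1:
        return max(tier1, key=score)["text"]
    # 2. Any response containing the primary keyword, regardless of category.
    tier2 = [r for r in RESPONSES if r["primary"] and r["primary"] in qwords]
    if tier2:
        return max(tier2, key=score)["text"]
    # 3. Generic responses for the question's category.
    tier3 = [r for r in RESPONSES if r["category"] == category and not r["primary"]]
    if tier3:
        return random.choice(tier3)["text"]
    # 4. Fall back to a generic "what" response.
    return random.choice([r for r in RESPONSES
                          if r["category"] == "what" and not r["primary"]])["text"]

print(answer("Who was Cleopatra?"))
```

Each tier runs only if the previous one found nothing, so the agent always has something to say.
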

Next, I turned my attention to the faculty’s content editing interface.  I wanted the faculty member to enter a proposed question, designate a primary keyword and a synonym, supply any other keywords they thought were important for identifying the desired response more precisely, and then enter the desired response itself.

I also provided a button that enables a faculty member to quickly generate a different answer for the same question or a different question for the same response.  

I created a field that is populated with a different random integer on each search.  By subsorting responses by this random integer, it enables the agent to offer a different response to the same question if the question is asked more than once.  This supports the illusion of the agent being a “real” person because it will not necessarily return the same identical response each time. 
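As a sketch of that idea (in Python rather than Filemaker), giving each candidate response a fresh random sort key on every search varies which of several equally good responses is returned:

```python
import random

def pick_response(matches: list[str]) -> str:
    """Subsort equally scored responses by a fresh random key, so repeating
    the same question will not always return the identical answer."""
    return sorted(matches, key=lambda _: random.random())[0]

greetings = ["Hello again!", "Good to see you back.", "Welcome back!"]
print(pick_response(greetings))  # any one of the three greetings
```
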

“Believable agents must be reactive and robust, and their behaviors must decay gracefully. They must also be variable, avoiding repetitive actions even when such actions would be appropriate for a rational agent. They must exhibit emotion and personality. Ultimately they must be able to interact with users over extended periods of time and in complex environments while rarely exhibiting implausible behaviors.” – Dr. Patrick Doyle, Believability through Context: Using “knowledge in the world” to create intelligent characters

With the “engine” of my agent developed, I turned my attention to the visual representation of the character.  In their paper, The Relationship Between Visual Abstraction and the Effectiveness of a Pedagogical Character-Agent, Hanadi Haddad and Jane Klobas of Curtin University of Technology, Perth, Western Australia, point out the divergent views of information systems designers outside the character-agent field with those developers within it.

“Wilson (1997) suggests that more realistic character-agents may introduce distraction associated with the user’s curiosity about the personality of the character and overreading of unintended messages because of presentation complexity.”

“Unlike detailed realistic drawings, sketches help focus the mind on what is important, leaving out or vaguely hinting at other aspects. Sketches promote the participation of the viewer. People give more, and more relevant, comments when they are presented a sketch than when they are given a detailed drawing. A realistic drawing or rendering looks too finished and draws attention to details rather than the conceptual whole” (Stappers et al, 2000).

“On the other hand, research by psychologists suggests that people may put considerable cognitive effort into processing abstract representations of faces (Bruce et al. 1992; Hay & Young 1982). It is possible, therefore, that response to anthropomorphised character-agents, and especially their faces, may differ from responses to sketches. Gregory and his colleagues (1995) conducted studies on human response to faces at the physiological level. They demonstrated that humans are particularly receptive to faces. In terms of recognition, participants in their studies were more responsive to real faces than to abstracted line faces. They speculated, however, that people spend longer studying abstracted line faces and may find them more interesting (Gregory et al. 1995). If this is so, then contrary to theories of information design, an abstract face may introduce more distraction into the communication than a realistic face.”

Filemaker Pro provides multimedia container fields that enable me to include still images, animations, or even video clips.  However, not only is creating a unique graphic for each response time consuming, motion video files can be quite large and slow down the delivery of response information over the web.  Working with other agents, I had noticed that just the slight eye movement of a blink can be enough to reinforce the illusion of a sense of presence. This approach straddles the two opposing theories described above.  I would utilize a real face to capitalize on the human receptivity to a real face but keep animation to a minimum to reduce distraction.  I also think the use of a real faculty person’s face serves to reinforce the bond between the instructor and the student. A blink is also very easy to create from any faculty portrait.

I use an inexpensive animation tool called Animagic GIF Animator.  I begin with a portrait of the faculty member.  I open it in Photoshop (any image editor would suffice) and, after sampling the color of the skin above the eye, I paint over the open eye.  Then I open an unedited copy of the portrait in Animagic, insert a frame and select the edited version of the portrait.  I then set the open eye frame to repeat about 250 times and the closed eye frame to repeat once.  Then loop the animation.
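The same two-frame blink can be produced with any animation tool. As a rough sketch of the timing, here is the equivalent in Python with the Pillow imaging library (two solid-color frames stand in for the open-eye and painted-over closed-eye portraits):

```python
from PIL import Image

# Stand-ins for the real open-eye and edited closed-eye portrait images.
open_eyes = Image.new("P", (64, 64), 1)
closed_eyes = Image.new("P", (64, 64), 2)

# Hold the open-eye frame ~5 seconds, flash the blink for ~100 ms, loop forever;
# this mirrors the 250-to-1 frame-repeat ratio described above.
open_eyes.save(
    "blink.gif",
    save_all=True,
    append_images=[closed_eyes],
    duration=[5000, 100],
    loop=0,
)

with Image.open("blink.gif") as gif:
    print(gif.n_frames)
```
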

I created a related table that stores all unique information about each agent including their default image, their default greeting, their login password, their area of expertise, their email address and their homepage URL. I also developed a collection of alternate avatars to use for agent images in case some faculty were camera-shy.  These were created with Poser using their ethnic character library.

Finally, I designed the login screen where the student selects the tutor with whom they wish to converse.  Upon selecting the tutor and pressing the "Begin Conversation" button, the student is presented with the query screen, which includes the individual greeting of the selected tutor.

I also provided a button that faculty can use to log in and edit their agent.  It takes them to a layout that prompts them for a name and a password.

The famed World War II cryptanalyst Alan Turing held that computers would, in time, be programmed to acquire abilities rivaling human intelligence.

Alan Turing at age 16.
“As part of his argument Turing put forward the idea of an 'imitation game', in which a human being and a computer would be interrogated under conditions where the interrogator would not know which was which, the communication being entirely by textual messages. Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent.” – The Alan Turing Internet Scrapbook 


My virtual professor may not be as sophisticated as agents developed to pass the Turing Test, but I hope I have provided a framework for the development of a rigorous inquiry-based learning system.

“Effective inquiry is more than just asking questions. A complex process is involved when individuals attempt to convert information and data into useful knowledge. Useful application of inquiry learning involves several factors: a context for questions, a framework for questions, a focus for questions, and different levels of questions. Well-designed inquiry learning produces knowledge formation that can be widely applied.” - Thirteen Ed Online.

References:

McArthur, David, Matthew Lewis, and Miriam Bishay. "The Roles of Artificial Intelligence in Education: Current Progress and Future Prospects." Rand, 1993. <http://www.rand.org/education/mcarthur/Papers/role.html#anatomy>.

Doyle, Patrick. "Believability through Context: Using 'Knowledge in the World' to Create Intelligent Characters." Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems. Bologna, Italy: ACM Press, 2002. 342-49.

Haddad, Hanadi, and Jane Klobas. "The Relationship between Visual Abstraction and the Effectiveness of a Pedagogical Character-Agent." The First International Joint Conference on Autonomous Agents & Multi-Agent Systems. Bologna, Italy, 2002.

Wilson, M. "Metaphor to Personality: The Role of Animation in Intelligent Interface Agents." Animated Interface Agents: Making Them Intelligent, held in conjunction with the International Joint Conference on Autonomous Agents. Nagoya, Japan, 1997.

Stappers, P., I. Keller, and A. Hoeben. "Aesthetics, Interaction, and Usability in 'Sketchy' Design Tools." Exchange Online Journal, issue 1, December 2000. [Online]. Accessed 3 August 2004.

Bruce, V., A. Cowey, A. W. Ellis, and D. L. Perrett. Processing the Facial Image. Oxford, UK: Clarendon Press, 1992.

Hay, D. C., and A. W. Young. "The Human Face." In Normality and Pathology in Cognitive Function, edited by A. W. Ellis, 173-202. London: Academic Press, 1982.

Gregory, R., J. Harris, P. Heard, and D. Rose, eds. The Artful Eye. Oxford: Oxford University Press, 1995.

"Thirteen Ed Online: Concept to Classroom." Educational Broadcasting Corporation, 2004. Accessed 9 August 2004. <http://www.thirteen.org/edonline/concept2class/>.

Hodges, Andrew. "The Alan Turing Internet Scrapbook." University of Oxford, 2004. Accessed 9 August 2004. <http://www.turing.org.uk/turing/scrapbook/test.html>.

Sunday, February 06, 2011

Virtual Professors Using Conversational Agent Software: The Answer for the 3rd Dimension in Online Learning

"Developing that best-in-the-world online course — in which students would learn as much, or more, than in an ordinary classroom or a hybrid online class — requires significant investment. The Open Learning Initiative at Carnegie Mellon University, which has developed about 15 sophisticated online courses, mostly in the sciences, spent $500,000 to $1 million to write software for each. But neither Carnegie Mellon nor other institutions, which are invited to use its online courses, dares to use them without having a human instructor, too..."

"...But even when lectures are accompanied with syllabuses, handouts, sample problem sets and other aids that Academic Earth has for some of its courses, is the experience really complete? The Massachusetts Institute of Technology also shares the raw materials of courses in its OpenCourseWare program. For the benefit of autodidacts who aren’t M.I.T. students, it strives to publish materials online for every M.I.T. course. But students cannot interact and do not receive vital feedback about their own progress that an instructor or software provides."- Online Courses, Still Lacking That Third Dimension, Randall Stross, The New York Times

Way back in 1995, I became intrigued with developing conversational agents using software descended from "Eliza," a program written at MIT by Joseph Weizenbaum between 1964 and 1966 to simulate a Rogerian psychotherapist.  I wrote a script and adapted images of a bust of Julius Caesar to create an online "virtual" Julius Caesar with whom a web visitor could converse, asking whatever they wanted to know about Caesar's life and times.
Head of Julius Caesar from Trajan's Forum, now in the National Archaeological Museum in Naples, Italy.
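Eliza-style agents like this work by matching keyword patterns in the user's input and filling a canned response template, sometimes reflecting the user's own words back at them. A minimal sketch of the idea follows; the patterns and responses are invented for illustration, and the real Caesar script was far more extensive.

```python
import re

# Keyword patterns paired with response templates, checked in order.
RULES = [
    (re.compile(r"\bgaul\b", re.I),
     "My campaigns in Gaul lasted eight years. What would you like to know?"),
    (re.compile(r"\bwho (?:are|were) you\b", re.I),
     "I am Gaius Julius Caesar, consul and imperator of Rome."),
    (re.compile(r"\bwhy (.*)\?", re.I),
     "Why do you suppose {0}?"),  # reflect the user's question back
]

def reply(user_input: str) -> str:
    """Return the first matching canned response, Eliza-style."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # default, Rogerian-style prompt

print(reply("Tell me about Gaul"))
```

The charm of the technique is that a surprisingly small rule set, written in the subject's voice, can sustain the illusion of conversation for quite a while.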

I received e-mail from history teachers across the United States who actually started using my "virtual" Caesar in their classes and found it to be a dynamic learning tool that kept kids intrigued.

Then I tried to convince the professors I worked with to let me develop a "virtual" professor for each of them, providing online office hours 24/7 for each of their courses. After all, professors, especially those who have taught the same class for years, obviously have a wealth of answers to course FAQs.  To make the agent more interesting, I explained that we needed to impart each professor's unique personality into the agent, so that conversing with it online would feel like talking with the real professor.  For example, one professor enjoyed sea kayaking, so I asked him to talk about sea kayaking with me.  I also liked to include answers to questions about favorite books, movies, food, and so on.

Although I got a couple of professors intrigued, they were always too busy to spend the quality time needed to produce a truly convincing agent.  Perhaps if institutions considered paying instructors royalties for the use of their knowledge in developing "virtual professors," more progress could be made in the production of such online learning environments.