Saturday, January 16, 2016

Extending the Learning Environment: Virtual Professors in Education

A technology resource article by Mary Harrsch © 2005

For those of you interested in artificial intelligence development, here is an archive copy of a presentation I gave in 2005 (I'm consolidating my online contributions!)



Extending the Learning Environment: 
Virtual Professors in Education

By Mary Harrsch
Network & Management Information Systems
College of Education, University of Oregon
[2005]

Six years ago [1999], my sister was telling me about a fascinating History Alive Chautauqua event she had attended near Hutchinson, Kansas.  The program brings a reenactor portraying an historical figure into schools and communities for an educational presentation and question and answer session.  I thought to myself, “It’s too bad more people can’t take advantage of such a unique learning experience.”  Then, the technologist within me began to wonder if there was a way to create a virtual Chautauqua experience online.  As I pondered this possibility, I realized that if I could find software that could be used to create a “virtual” person online, I could not only recreate the experience of the Chautauqua, but provide a tool faculty could use to answer course-specific questions.  It could even be used to provide information about the professor’s personal interests and research to enhance the sense of community within the learning environment.

My quest led me to a website that included links to a number of different software agent projects.  I learned that the type of agent I needed was commonly called a “chatterbot”.  The first “chatterbot” actually predates the personal computer: in the mid-1960s, Joseph Weizenbaum created “Eliza”, a virtual psychoanalyst.

In his efforts to create a natural language agent, Weizenbaum pointed out that he had to address the technical issues of:

  • the identification of key words,
  • the discovery of minimal context, and
  • the generation of responses in the absence of key words.

As I began to explore different agent implementations, I found that, in addition to these issues, the application needed to be able to prioritize keywords to discern the most appropriate response.  Several agents I evaluated used slightly different methods to set the priority of subject keywords: Sylvie, a desktop assistant developed by Dr. Michael (“Fuzzy”) Mauldin; Artificial Life’s Web Guide; Carabot 500, developed by the U.K. company Colorzone; and Kiwilogic’s Linguibot.  The response with matching keywords under the subject with the highest level setting was “fired” – displayed to the user.  However, when editing their script files, I found keeping track of subject priorities was challenging.

Another problem with many script-driven agents I evaluated was the use of left-to-right parsing sequences that did not accommodate variations in the order of keywords in a question.  Each query had to be evaluated for its subject and for matching character strings, based on left-to-right word order, with various “wildcard” characters indicating the placement of keywords within the overall question.  As a result, you often needed multiple script entries to compensate for different word orders.  For example, if a student asks “How do I change my password in e-mail?” you would need one script entry; if the student asks “How do I change my e-mail password?” a different script entry would be required:

* email * * password * as well as
* password * * email * to trap for either wording.
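
To make the word-order problem concrete, here is a minimal Python sketch (illustrative only; it is not the scripting syntax of any of the products named above) contrasting the two order-dependent wildcard entries with a single order-independent keyword test:

import re

def normalize(question):
    # Lowercase and drop hyphens so "e-mail" and "email" compare equal.
    return question.lower().replace("-", "")

# Rough analogues of the two wildcard script entries above; both are
# needed because the match is anchored to left-to-right word order.
LEFT_TO_RIGHT = [
    re.compile(r".*email.*password.*"),   # "* email * * password *"
    re.compile(r".*password.*email.*"),   # "* password * * email *"
]

def matches_left_to_right(question):
    return any(p.search(normalize(question)) for p in LEFT_TO_RIGHT)

# An order-independent check needs only one rule: every keyword must
# occur somewhere in the question, wherever it happens to fall.
def matches_keywords(question, keywords=("email", "password")):
    q = normalize(question)
    return all(k in q for k in keywords)

print(matches_left_to_right("How do I change my e-mail password?"))  # True (first pattern)
print(matches_keywords("How do I change my password in e-mail?"))    # True (same single rule)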

Although this attention to script design resulted in improved response accuracy, the scripting knowledge required for these agents was not something I would expect a faculty member to have the time or desire to learn.

A third problem with several of the agent applications I used was the necessity to unload and reload the agent each time the script file was edited.  If students were actively querying the agent, you could not save any script changes until the script file was no longer in use.

When I invested in the Enterprise Edition of Artificial Life’s Web Guide software, I also realized the importance of a logging feature that I could use to study and improve my guide’s responses.  In a virtual tutoring environment, a student should also be able to print out a transcript of their tutoring session for future study.  Not only was this feature absent in the agents I evaluated, but responses rendered with JavaScript or Flash could not even be highlighted and copied to the clipboard.

One day, I explored UltraHal Representative, developed by Zabaware, Inc.  I liked the ability UltraHal provided to program the agent through a web interface.  It could include links to related information, it could be customized with personalized graphics, and it logged interactions.  Best of all, it had a straightforward approach to editing: no scripting – just type your question three different ways, then type your intended response.

But I soon discovered that, without the ability to identify keyword priority, the results produced by whatever algorithm was built into the agent engine were too inaccurate for a virtual tutoring application.

I needed a product that could be programmed to be “omniscient”. 

“Effective ITS require virtual omniscience -- a relatively complete mastery of the subject area they are to tutor, including an understanding of likely student misconceptions.” (McArthur, Lewis, and Bishay, 1993)

I needed a virtual professor that could be “programmed” by real professors, the individuals who would have a mastery of the subject and an understanding of student misconceptions.  But all of the chatterbots I had encountered so far (with the exception of UltraHal) required knowledge of scripting that most faculty members do not have the time to learn.  I would not have the time to work one-on-one with faculty developers, and paying a programmer to work with a faculty member would be too expensive.  (I noticed that most developers of commercial agents actually relied on the scripting needs of their clients for their primary revenue stream.)  So, I decided to attempt a radically different approach to agent design.

I am an experienced FileMaker Pro solutions developer, and one day, while reviewing some of FileMaker’s text functions, I realized that the Position function could be used to detect keywords in a text string.  The beauty of the Position function is that the keyword can be identified anywhere within the target text; it is not dependent on left-to-right word order, and FileMaker’s text comparisons are not case sensitive.  In addition, the new version 7 allows most text-processing calculations to be used with its Instant Web Publishing interface, which I realized would greatly simplify web integration.
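
In Python terms (FileMaker’s actual calculation syntax differs, so treat this as a rough analogy), the core test is simply whether the keyword occurs anywhere in the text, regardless of position or case:

def contains_keyword(text, keyword):
    # Analogous to testing Position(text; keyword; 1; 1) > 0 in FileMaker:
    # the keyword may appear anywhere, and the comparison ignores case.
    return keyword.lower() in text.lower()

print(contains_keyword("How do I change my email password?", "PASSWORD"))  # True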

So, reviewing my experiences with the agent applications I had used, I developed a list of features that I wanted to incorporate:

Web functionality:
  • Multiple agents controlled by a single administration console
  • Web-based query interface
  • Web-based editing interface
  • Multiple graphic format support
  • Web-accessible logging function for both the agent editor and the student user
  • Ability to display related resources

Query processing functionality:
  • Question context awareness (who, what, when, where, why, how, etc.)
  • Ability to weight keywords by importance without user scripting
  • Ability to return an alternate response if a question is asked more than once
  • Ability to use one response for different questions
  • Ability to process synonyms, international spelling differences, and misspellings
  • Independence from word order
  • No case sensitivity

Structural Design:
  • Modular design to enable importation of knowledge modules developed by others
  • Agent-specific attributes to customize the interface and responses, such as a personal greeting, the option to use the person’s homepage as a default URL, information about areas of expertise and research interests for alternative agent selection criteria, custom visual representations, etc.

I began by designing my response search criteria.  I programmed the agent search routine to categorize responses by the first word of the query (usually What, Where, Why, How, Who, Did, Can, and so on) to establish the question context.  Then I used Position formulas to test for the presence of keywords.  I then developed an algorithm that weighted the primary keyword (or its synonym) and totaled the number of keywords found in each record.

I designed the search function so that when the visitor presses the button to ask their question, the database first finds all responses for the question category (who, what, when, etc.) containing the primary keyword (or its synonym).  Responses are then sorted in descending order by the total sum of keywords present in each response.   The first record – the one with the most keyword matches – is displayed as the answer. 
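
The ranking step can be pictured with a short Python sketch (the field names and sample record are purely illustrative; the actual solution does this with FileMaker calculation fields and a sorted find):

def question_category(question):
    # The first word of the query (what, where, why, how, who, did, can, ...)
    # establishes the question context.
    words = question.strip().lower().split()
    return words[0] if words else "what"

def keyword_score(question, response):
    # The primary keyword (or its synonym) must appear somewhere in the
    # question; the total of all matching keywords determines the ranking.
    q = question.lower()
    synonym = response.get("synonym")
    if response["primary"] not in q and not (synonym and synonym in q):
        return 0
    return 1 + sum(1 for kw in response.get("keywords", []) if kw in q)

record = {"category": "how", "primary": "password", "synonym": "passphrase",
          "keywords": ["email", "change"],
          "answer": "Your e-mail password can be changed from the Webmail settings page."}
print(question_category("How do I change my email password?"))      # "how"
print(keyword_score("How do I change my email password?", record))  # 3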

If there are no category responses containing the primary keyword, then a second find will execute to look for all responses with the keyword regardless of category.  In working with other agent products, I have found that if you return a response with at least some information about the keyword, even if it is not an exact answer to the question, the student assumes the agent recognized their question and may learn auxiliary information that is still helpful to them.

For example, if a visitor asks my virtual Julius Caesar if he really loved Cleopatra, he will answer “Cleopatra…ah, what an intriguing woman.”  Not only is this more in character with Caesar (most of his female dalliances were for political reasons) but the answer could also be appropriate for a different question, “What did you think of Cleopatra?”  My search routine would find it in either case because of the weighting of the primary keyword, Cleopatra.

If there are no responses containing the primary keyword, a third find looks for any generic category responses.  For example, if a student asks who someone is and you have not programmed your agent with a specific answer for the keyword (the person they are asking about), the agent will reply with an appropriate “who” response such as “I’m afraid I’ve never made their acquaintance.” 

If a student’s question does not begin with any of the words set as category words, a final find returns a generic “what” response such as “I may be a fountain of knowledge, but I can’t be expected to know everything.”  Programming the agent with default generic responses ensures that the agent always has something to say, even if it knows nothing about the subject.  I developed a small database of generic responses for each question category that is imported into an agent database each time a new agent is created.  The faculty member can go through the responses and edit them if they wish.
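
Taken together, the cascade of finds behaves roughly like the sketch below, which reuses the question_category and keyword_score helpers from the earlier sketch (the actual solution performs these steps as successive FileMaker find requests):

def answer(question, responses, generic_responses):
    # 1) best match within the question's category, 2) best match containing
    # the primary keyword in any category, 3) a generic response for the
    # category, 4) a generic "what" response so the agent always replies.
    category = question_category(question)
    ranked = sorted(responses, key=lambda r: keyword_score(question, r), reverse=True)

    in_category = [r for r in ranked
                   if r["category"] == category and keyword_score(question, r) > 0]
    if in_category:
        return in_category[0]["answer"]

    if ranked and keyword_score(question, ranked[0]) > 0:
        return ranked[0]["answer"]

    if category in generic_responses:
        return generic_responses[category][0]

    return generic_responses["what"][0]

generic_responses = {
    "who":  ["I'm afraid I've never made their acquaintance."],
    "what": ["I may be a fountain of knowledge, but I can't be expected to know everything."],
}
print(answer("Who was Vercingetorix?", [], generic_responses))  # falls through to the generic "who" reply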

Next, I turned my attention to the faculty content-editing interface.  I wanted the faculty member to enter a proposed question, designate a primary keyword and a synonym, supply any other keywords they thought would help pinpoint the desired response, and then supply the response itself.

I also provided a button that enables a faculty member to quickly generate a different answer for the same question or a different question for the same response.  

I created a field that is populated with a different random integer on each search.  Subsorting responses by this random integer enables the agent to offer a different response to the same question if it is asked more than once.  This supports the illusion that the agent is a “real” person, because it will not necessarily return the identical response each time.
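
A Python sketch of that idea, again reusing keyword_score from the earlier sketch (in the actual solution the random integer is simply a FileMaker field that is refreshed on every search and used as a secondary sort key):

import random

def varied_best(question, responses):
    # Sort by keyword score first, then by a freshly drawn random integer,
    # so ties among equally good responses are broken differently each time
    # the same question is asked.
    matches = [r for r in responses if keyword_score(question, r) > 0]
    matches.sort(key=lambda r: (-keyword_score(question, r), random.randint(0, 9999)))
    return matches[0]["answer"] if matches else None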

“Believable agents must be reactive and robust, and their behaviors must decay gracefully. They must also be variable, avoiding repetitive actions even when such actions would be appropriate for a rational agent. They must exhibit emotion and personality. Ultimately they must be able to interact with users over extended periods of time and in complex environments while rarely exhibiting implausible behaviors.” – Dr. Patrick Doyle, Believability through Context: Using “knowledge in the world” to create intelligent characters

With the “engine” of my agent developed, I turned my attention to the visual representation of the character.  In their paper, The Relationship Between Visual Abstraction and the Effectiveness of a Pedagogical Character-Agent, Hanadi Haddad and Jane Klobas of Curtin University of Technology, Perth, Western Australia, point out how the views of information systems designers outside the character-agent field diverge from those of developers within it.

“Wilson (1997) suggests that more realistic character-agents may introduce distraction associated with the user’s curiosity about the personality of the character and overreading of unintended messages because of presentation complexity.”

Unlike detailed realistic drawings, sketches help focus the mind on what is important, leaving out or vaguely hinting at other aspects. Sketches promote the participation of the viewer. People give more, and more relevant, comments when they are presented a sketch than when they are given a detailed drawing. A realistic drawing or rendering looks too finished and draws attention to details rather than the conceptual whole (Stappers et al., 2000).

“On the other hand, research by psychologists suggests that people may put considerable cognitive effort into processing abstract representations of faces (Bruce et al. 1992; Hay & Young 1982). It is possible, therefore, that response to anthropomorphised character-agents, and especially their faces, may differ from responses to sketches. Gregory and his colleagues (1995) conducted studies on human response to faces at the physiological level. They demonstrated that humans are particularly receptive to faces. In terms of recognition, participants in their studies were more responsive to real faces than to abstracted line faces. They speculated, however, that people spend longer studying abstracted line faces and may find them more interesting (Gregory et al. 1995). If this is so, then contrary to theories of information design, an abstract face may introduce more distraction into the communication than a realistic face.”

FileMaker Pro 7 provides multimedia container fields that enable me to include still images, animations, or even video clips.  However, not only is creating a unique graphic for each response time-consuming, but motion video files can be quite large and can slow the delivery of responses over the web.  Working with other agents, I had noticed that even the slight eye movement of a blink can be enough to reinforce the illusion of presence.  This approach straddles the two opposing theories described above: I would use a real face to capitalize on human receptivity to faces, but keep animation to a minimum to reduce distraction.  I also think the use of a real faculty member’s face serves to reinforce the bond between the instructor and the student, and a blink is very easy to create from any faculty portrait.

I use an inexpensive animation tool called Animagic GIF Animator.  I begin with a portrait of the faculty member, open it in Photoshop (any image editor would suffice), and, after sampling the color of the skin above the eye, paint over the open eye.  Then I open an unedited copy of the portrait in Animagic, insert a frame, and select the edited version of the portrait.  I set the open-eye frame to repeat about 250 times and the closed-eye frame to repeat once, then loop the animation.
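
The same two-frame blink could also be assembled programmatically; here is a small sketch using the Pillow imaging library (the file names are placeholders, and the frame timings only approximate the repeat counts described above):

from PIL import Image  # Pillow

# Two frames: the unedited portrait (eyes open) and the edited copy with
# the eye painted shut.  File names here are placeholders.
eyes_open = Image.open("portrait_open.png")
eyes_closed = Image.open("portrait_closed.png")

# Roughly the effect of "open eye x 250 frames, closed eye x 1 frame":
# hold the open-eye frame for about ten seconds, flash the blink briefly,
# and loop the animation indefinitely.
eyes_open.save(
    "blinking_portrait.gif",
    save_all=True,
    append_images=[eyes_closed],
    duration=[10000, 40],  # milliseconds per frame
    loop=0,                # 0 = loop forever
)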

I created a related table that stores all of the unique information about each agent, including the default image, default greeting, login password, area of expertise, email address, and homepage URL.  I also developed a collection of alternate avatars to use as agent images in case some faculty were camera-shy; these were created with Poser, using its ethnic character library.
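
Conceptually, each record in that related table amounts to something like the following (a sketch only; the field names and sample values are illustrative, and in the actual solution they are FileMaker fields rather than Python attributes):

from dataclasses import dataclass

@dataclass
class AgentProfile:
    # One record per agent in the related attributes table.
    name: str
    default_image: str      # faculty portrait or Poser-generated avatar
    default_greeting: str
    login_password: str
    expertise: str
    email: str
    homepage_url: str

caesar = AgentProfile(
    name="Julius Caesar",
    default_image="caesar_blink.gif",
    default_greeting="Ave! What would you ask of Caesar?",
    login_password="********",
    expertise="Roman history and politics",
    email="instructor@example.edu",
    homepage_url="http://example.edu/~instructor",
)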

Finally, I designed the login screen where the student selects the tutor with whom they wish to converse.  Upon selecting the tutor and pressing the “Begin Conversation” button, the student is presented with the query screen, including the individual greeting for the selected tutor.

I also provided a button for faculty to use to log in and edit their agent.  It takes them to a layout that prompts them for a name and password.

Famed World War II cryptographer Alan Turing held that computers would, in time, be programmed to acquire abilities rivaling human intelligence.

Alan Turing at age 16.
“As part of his argument Turing put forward the idea of an 'imitation game', in which a human being and a computer would be interrogated under conditions where the interrogator would not know which was which, the communication being entirely by textual messages. Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent.” – The Alan Turing Internet Scrapbook 


My virtual professor may not be as sophisticated as agents that have been developed to pass the Turing Test, but I hope I have provided a framework for the development of a rigorous inquiry-based learning system.

“Effective inquiry is more than just asking questions. A complex process is involved when individuals attempt to convert information and data into useful knowledge. Useful application of inquiry learning involves several factors: a context for questions, a framework for questions, a focus for questions, and different levels of questions. Well-designed inquiry learning produces knowledge formation that can be widely applied.” - Thirteen Ed Online.

References:

McArthur, David, Matthew Lewis, and Miriam Bishay. "The Roles of Artificial Intelligence in Education: Current Progress and Future Prospects." RAND, 1993. <http://www.rand.org/education/mcarthur/Papers/role.html#anatomy>.

Doyle, Patrick. "Believability through Context: Using 'Knowledge in the World' to Create Intelligent Characters." Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems. Bologna, Italy: ACM Press, 2002. 342-349.

Haddad, Hanadi, and Jane Klobas. "The Relationship between Visual Abstraction and the Effectiveness of a Pedagogical Character-Agent." The First International Joint Conference on Autonomous Agents and Multi-Agent Systems. Bologna, Italy, 2002.

Wilson, M. "Metaphor to Personality: The Role of Animation in Intelligent Interface Agents." Animated Interface Agents: Making Them Intelligent (workshop held in conjunction with IJCAI '97). Nagoya, Japan, 1997.

Stappers, P., Keller, I., and Hoeben, A. "Aesthetics, Interaction, and Usability in 'Sketchy' Design Tools." Exchange Online Journal, issue 1, December 2000. [Online; accessed 3 August 2004].

Bruce, V., Cowey, A., Ellis, A. W., and Perrett, D. L. Processing the Facial Image. Oxford, UK: Clarendon Press, 1992.

Hay, D. C., and Young, A. W. "The Human Face." In Normality and Pathology in Cognitive Function, ed. A. W. Ellis. London: Academic Press, 1982. 173-202.

Gregory, R., Harris, J., Heard, P., and Rose, D. (eds.) The Artful Eye. Oxford: Oxford University Press, 1995.

"Thirteen Ed Online: Concept to Classroom." Educational Broadcasting Corporation, 2004. Accessed 9 August 2004. <http://www.thirteen.org/edonline/concept2class/>.

Hodges, Andrew. "The Alan Turing Internet Scrapbook." University of Oxford, 2004. Accessed 9 August 2004. <http://www.turing.org.uk/turing/scrapbook/test.html>.

Monday, January 06, 2014

Roku services far superior to TV Manufacturers' Built-in Apps



A technology resource article by  © 2014

This morning I read an article in the New York Times proclaiming that TV manufacturers are counting on smart TVs to boost lackluster sales.  I seriously doubt that adding internet connectivity directly to each TV is the silver bullet they're all looking for, given all of the "smart" options consumers already have.

I was one of the early adopters of Roku's video streaming device back in 2008.  I used it successfully, with only a standard DSL internet connection, on a Mitsubishi projection TV that was over ten years old.  Then four years ago, my husband and I purchased a 3D Samsung Smart TV after being intrigued by James Cameron's "Avatar" the year before and deciding it was finally time to make the leap to HDTV.  It came bundled with a 3D Smart Blu-Ray player as well.  I had also purchased a "less" smart Samsung HDTV for my office that could download files from my remote PC but could not talk to the internet (a Black Friday special at the time).  So, I moved my Roku player to my office to make the TV there internet "smart" and used the applications on the smart Blu-Ray player in the living room to watch Netflix there.

But my husband, who has become progressively more hard of hearing, became frustrated with watching Netflix streaming movies without the ability to turn on subtitles to serve as closed captioning, as he does with the Netflix DVDs we get by mail.  I thought Netflix just didn't provide subtitles with their streaming service.  Then one day I read an article that pointed out the Roku device's ability to provide subtitles with its streaming content.  I had not realized the absence of subtitles was a factor of the application you were using, not of Netflix itself.  Furthermore, I had purchased some DVDs from Amazon and was offered the ability to watch a digital copy until my DVD arrived using Amazon's Instant Video service, but neither my "smart" TV nor my "smart" Blu-Ray player offered an application for Amazon Instant Video.

During the holidays in 2012, Roku offered their latest HD streaming device on sale, so I purchased one.  I registered my new Roku device with Netflix and read up on how to set up subtitles on the Roku.  Then I connected to Netflix and selected a movie to try it out.  Voila!  Subtitles appeared just like they do when you select subtitles for a DVD!  I also now had access to Amazon Instant Video and a wealth of other channels, including the History Channel's new online offerings.

I surmised from this experience that TV manufacturers view apps as secondary and don't have the interest or resources dedicated to improving and/or updating their "homegrown" applications.  But the streaming experience is the sole reason Roku exists, so the folks at Roku are constantly working on adding new features and more content.

The NYT article did mention an alliance between Roku and several TV manufacturers that will enable TVs to come equipped with an embedded Roku application.  As long as Roku is managing the features and content, this could work well, but it is hardly a reason for anyone to buy a new HDTV if they already have one, as long as Roku, Chromecast, Apple TV, and other internet-enabled devices are available for less than $100 (or, in many cases, less than $50).


Tuesday, September 03, 2013

Saving money with VUDU At Home

A technology resource article about the VUDU At Home streaming media application by  ©2013

I adopted VUDU as my movie collection cloud service last year with my fingers crossed that it would remain viable, especially as a subsidiary of retailing giant Walmart.  A few months ago I received an e-mail from VUDU asking me to try their new VUDU At Home application that would let me use my own computer to scan my DVDs and license them for my digital cloud collection.  The kicker was that if I scanned at least 10 movies in a session, I would only have to pay 50% of the digital conversion fee.  I found this a really attractive offer, since I had an extensive library of standard-definition DVDs that would cost me $5 each to convert to high-quality streaming digital copies if I took them to Walmart for verification.

So, I downloaded the app onto my HP 64-bit Windows 7 workstation, collected my stack of DVDs that I had verified were available for Disc to Digital conversion, and prepared to scan my first disc.  Unfortunately, regardless of which disc I tried, I received an error that the application could not read the disc.  That was not a good start.

I reported the problem to VUDU and they had me submit a trouble ticket that soon disappeared into the dark hole of a tech support queue, never to be seen or discussed again.  I wasn't willing to give up this opportunity to save a substantial amount of money though, so I thought I would try the app on my husband's Lenovo laptop.  Voila, it installed and functioned without a hitch and I was soon scanning dozens of my DVDs and converting them to HDX for only $2.50. (I wish I had a Blu-Ray drive on that laptop.  Then, I would only have to pay $1 for my Blu Ray discs!)

I only ran into a few DVDs that could not be read and my research on the VUDU At Home discussion forum revealed that there were some titles that were apparently missing from the GraceNote database that was being used by VUDU to verify discs.  I reported the titles and their UPC codes to GraceNote as well as VUDU.

Gracenote logo (Photo credit: Wikipedia)
I received a canned reply from GraceNote telling me I should report the problem to VUDU and essentially brushing me off. But I sent back a reply explaining that the problem had already been reported to VUDU and I was providing the metadata to GraceNote so they could amend their database.  (After all, I used to be a database designer!) Then I received an email back from GraceNote telling me that VUDU had only bought certain versions of their metadata and that it was VUDU's problem not GraceNote's - typical inter-company tech support finger pointing.

Oh well, at least I saved, by my estimate, about $320 in digital conversion fees using the VUDU At Home application.  I would encourage VUDU to keep the discount program in effect after the end of the public beta, as it serves as the carrot to get people to scan their own discs as opposed to running down to Walmart with them, where many of the photo center staff have little or no experience with the VUDU verification process and often take 30 to 45 minutes to certify the discs and then stamp them to deter people from sharing their discs with friends and neighbors for their own VUDU registrations.

One other observation about VUDU's digital delivery system: lately, I always check VUDU's direct purchase price and compare it with Amazon's every time I want to purchase a movie.  With the exception of 3D movies (which for some reason are never offered to consumers as a digital copy, despite the fact that VUDU will rent and can stream 3D movies), I now see no reason to buy a physical disc unless it is significantly cheaper.

I do wish the digital version were less expensive, though, as it should be, instead of equal or nearly equal to the price of a physical disc.  I realize this is probably the result of licensing contracts with the studios, but it hardly takes into account the cost savings of not having to physically produce and distribute a disc.  The movie studios must have taken lessons from the e-book sellers who think they should be paid as much for a download file as for a physical hardcopy book.  This kind of fattening of the profit margin only leads to more people willing to purchase pirated merchandise in the long run, as the RIAA discovered in the music industry.  If digital-only versions of films were priced at $9.99 or less, you'd see a lot more people buying digital in the first place rather than scrabbling through bargain bins or searching the secondary market for their favorites.


Wednesday, September 12, 2012

Adaptive advertising invites depressive stereotyping

Detail of a New York Times advertisement, 1895 (Photo credit: Wikipedia)
"Data-driven discovery is tech's new wave" touts a recent article in the New York Times. The article points out that developments in computing power coupled with inexpensive data storage has produced a digital "boiling point" that will enable companies to begin surgically targeting consumer groups based on incisive analysis of web browsing "trails" and age, gender and interests profiles using machine-learning algorithms.  One company they mention is  Rocket Fuela four-year-old Silicon Valley start-up that uses artificial-intelligence software to place display advertisements for marketers on the Web .  So I visited the company's website to see how they describe the service they offer to their own clients.

Rocket Fuel points out that they have defined over 20,000 audience segments that I assume are applied to vast numbers of potential client customers.  I was hoping they would define a retirees segment online so I could see how much of their profile applied to me.  Unfortunately, they didn't detail that demographic group but they did describe others with which I share some attributes.  Here are those segments with my take on their validity based on my own preferences.

Gadget Geeks are technology early adopters who are passionate about gadgets
  • Passionate about technology and gadgets - yes
  • Enjoy sharing tech expertise with family and friends - sometimes (I don't like being tech support for friends and family - after all I used to do that for a living and retired to get away from it!)
  • Prioritize quality and brand when shopping - as long as I get bang for the buck
  • Interested in researching and buying the newest gadgets - research yes, buying - not usually a "dot.zero" release and not unless the gadget offers perceived value based on my needs
Leisure Travelers love to travel for pleasure and frequently hunt for travel deals
  • Passionate about travel and travel deals - yes if in my preferred travel area and not a cruise
  • Frequent fliers - yes
  • Enjoy researching about travel online - yes
Not bad so far but how about less niche-oriented segments?

Value Shoppers are budget-conscious shoppers seeking value and quality
  • Research online for deals and coupons - only if I have a product already selected
  • Interested in sweepstakes and contests - no
  • Primary grocery decision-maker, coupon clips, bargain hunts - yes, sometimes and only if bargain information is delivered to me (email) or readily available without extensive research
Moms-on-the-Go are career-oriented and thrive on time-saving products
  • Socially active and web savvy - yes
  • Interested in products that allow more time with family - more leisure time
  • Enjoy dining at family restaurants and steakhouses - no; prefer international cuisine
  • Frequently buy quick-fix meals and time-saving products - no box meals; prefer freshly combined ingredients at a deli or takeout
  • Altruistic, responsible, and creative psychographics - usually
It's obvious that when they try to stretch their profiles over a much larger segment, discretionary preferences become more of an issue.

I'm not saying this development is necessarily bad.  I am the first to admit that I really appreciate the algorithm Netflix has developed to recommend movies to me based on my viewing history and expressed ratings.  Their recommendations hit the mark more and more often.  But I have rated over 1,000 movies on Netflix.  I usually only buy a car once every ten years or so and I have no particular brand loyalty.  Furthermore, I am in no financial position to "surprise" my spouse with a new Lexus at Christmas time either! (How I hate those commercials during the holidays - I feel they have been particularly tasteless during this economic recession!!)  Not that I would consider spending that much for mere transportation a worthwhile investment anyway!

As an older consumer, what I fear most is that the media I watch will become depressingly saturated with what Rocket Fuel delicately describes as "senior products".  It already seems like I hear about nothing but incontinence products, sexual dysfunction, medications for all kinds of diseases that befall an aging body, hair loss, wrinkle creams and face lifts, Alzheimer's care centers and estate planning.  It makes me wonder if DISH network has already begun a campaign of adaptive advertising.  Maybe it's just because the educational programming we watch in our household earmarks us as demographically more mature viewers.

After all, during the day the TV is often left on for no other reason than to provide background noise for the dogs and no "consumer" is actually watching it anyway!

What we really need is on-demand program selection so our interests are more specifically defined and we are not automatically profiled by the overall channels we watch. Furthermore, now that so many of us have smart TVs, broadcasters should take a cue from Facebook and give us the opportunity to give ads a thumbs up or thumbs down then remove all ads for products that we have indicated we are absolutely not interested in.  Of course that would mean satellite and cable providers would FINALLY have to surrender their antiquated marketing model of channel tiers!!


Monday, August 20, 2012

Augmented Reality Apocalypse basis of new YouTube series H+

I was browsing through my newsfeed on Facebook and stumbled across a reference to a new digital series produced by Warner Brothers and offered through YouTube instead of the typical cable or satellite services.  It's called H+ and I found the trailer really intriguing:


I'm a fan of apocalyptic fiction, and a series where the apocalypse is the result of a technology gone awry is irresistible to me.  The basis for this series is a new technology in which augmented reality is delivered via brain implant.  Apparently it works wonderfully well for a time, until about a third of the world's population (the early adopters) die, leaving the management of the world mostly to people from third-world nations.  I'm sure Mitt Romney would find this scenario mortifying!

It will be interesting to see how this plays out, not only from an entertainment perspective but from the viewpoint of Warner Brothers, which is attempting to tap into the large number of people who are "cutting the cable" and turning to online streaming as their primary source of media.

The PR says each episode will be only eight minutes long and there will be 48 episodes, with two episodes released every week.  I'm a little confused about why the episodes are so short.  Although it's true most online viewers are used to relatively short videos on YouTube, the number of us with smart TVs that can watch YouTube on the big screen is growing rapidly, and we would certainly prefer the more traditional episode length.  However, the director, Bryan Singer, hopes web viewers will actually rearrange the episodes and search for clues to solve mysteries, almost like a dynamic real-time video game, ultimately changing the way we consume video entertainment.

Singer certainly has hit upon an appropriate emerging technology to use as the platform for his new series.  Although augmented reality has been discussed for quite some time (it was a major topic at an Emerging Technologies Conference I attended back in 2006), actual implementation has been a bit slow, with Google's "Project Glass" probably being the most familiar application to date.  Project Glass involves the use of specially equipped glasses that combine information from the internet with GPS location to display data appropriate to a user's location as they move through their environment.  Google had initially indicated the technology would be released in 2012, but now they are projecting that a consumer-grade product will probably not be available for sale until 2014.

However, smartphone users have already found other augmented reality mini-applications to be useful.  One application I recommended to the nature photographers in the Emerald Photographic Society is Peak AR.  It is an application that uses your smartphone's camera and GPS to identify nearby mountain peaks by simply pointing your smartphone's camera in their direction.

Another really useful AR app is named iOnRoad.  This app monitors such things as whether your car is straying outside your lane, advises you of insufficient headway, and warns you if a collision is imminent.

Google Goggles will let you scan a painting and provide information about the artist and a description of the work.  Wikitude lets you pull up Wikipedia entries on objects or landmarks simply by focusing your camera on them. The app also finds mobile coupons and discounts for local stores.

So, the technology in H+ is already here - just not implanted as yet.  As for the digital series plot, I would offer an alternative story line: what if the new AR modules begin projecting frightening imagery so realistic that people can no longer distinguish real from virtual?  But I guess that story line will have to wait for another day!

Update: I watched the first two episodes using the YouTube viewer on my smart Samsung TV.  The first episode was almost 8 minutes, but the second was only about 4 minutes, including about 1 minute of credits.  I felt like I watched more credits than program!  I see the next 3 episodes are also shorter than 8 minutes.  Come on, guys!  There are a lot of us with smart TVs who don't have the "Play All" option that the regular YouTube website has, and having to scroll to and start each subsequent episode is a pain - especially when the episodes are not presented in order in the TV YouTube app's search function!



Are banks ripping off consumers with online banking?

Today, I received a notice from Chase that my credit card payment that was due on Sunday had not been received yet.  Since I use online banking from my local credit union to pay my bills, I immediately went online and saw that the payment to Chase was withdrawn from my account electronically on Friday, August 17.  So I called Chase and their customer service agent told me that there is always a delay "getting" an electronic payment and that they didn't "receive" my payment until this morning Monday, August 20, and they had assessed me a $25 late fee.  I told the agent that an electronic transfer is an instant process and the money was withdrawn from my account on Friday and would have been instantly received by Chase on Friday.  If their computer system does not attribute that money to my account until they perform some manual process, it is hardly fair to charge me a $25 late fee since the money was received on Friday and was sitting in Chase's funds merely waiting for someone to allocate it as a payment to someone's specific account on Monday.  I have seen other credit card companies say in their fine print that if a payment is due on the weekend and funds to pay it are electronically "sent" on the weekend but not officially posted until Monday, the funds will be recognized as received on the weekend.  Apparently, Chase does not follow this policy!

I asked if, since it was obvious I had authorized the payment in time and have an excellent payment history with Chase, always paying each statement in full by the due date until now, they would refund the late fee.  But the customer service clerk flatly refused.  So I asked to speak to a supervisor; then she said she would refund the late fee and asked me if I still needed to talk to a supervisor.  I told her that if she was issuing the refund I didn't need to talk to a supervisor, but that I should not have had to ask to speak to one in the first place.  At that point, without any reply, I just heard the tones of someone dialing a phone, then hold music.  After a brief wait, a man came on the line who I assumed was the supervisor.  I again explained what had happened with the online electronic payment and asked for a refund of the late fee, and he agreed.

But, is this the new way banks are trying to slip in more fees on an unsuspecting public?  An electronic transfer cannot occur unless both sender and receiver participate.  The sending institution is including all information necessary to earmark the funds being received as to the payor's identification.  Even if the computers at the receiving end must run some other subroutines to actually apply the information to a customer's account at a later date, the fact that the transmission occurred on a specific day should be the basis for recognizing receipt of the funds.  Before I retired I used to design databases and know that as long as the two institutions participating in an electronic exchange have their database fields mapped properly, the exchange can be handled in a millisecond without human intervention.  So why does the Federal Banking Commission allow banks to get away with slapping customers with a big late fee when the receiving institution has actually had the funds in their possession before the due date???

Friday, July 06, 2012

Will Amazon expand TV Play option to facilitate à la carte video on demand?



Just noticed that Amazon is now offering a "TV Play" option for future episodes of selected television series, such as the popular series produced by AMC.  This is like video-on-demand for each episode you may have missed of a TV series you are following.

I noticed this option when I received a postcard from DISH network mentioning a special offer available for DISH customers that were watching some of the original series produced by AMC but now no longer available through DISH (because of a pricing dispute) like Breaking Bad, The Walking Dead and Hell on Wheels.  Since the postcard mentioned Amazon Video as their recommended viewing alternative, I went up on Amazon and noticed the new TV Play option for Hell On Wheels (about the construction of the transcontinental railroad) that we were following on AMC.  So I called the phone number DISH provided on the postcard to see if they had some kind of coupon code and they actually gave me a credit on my DISH account equivalent to the cost of streaming this coming season's episodes of Hell On Wheels using Amazon Video.

Is this one of the first cracks to appear in the armor of the tier-based cable and satellite pricing structures?  If so, I applaud Amazon Video for taking a bold step to offer this service and DISH Network for being concerned enough about their customers to offer the equivalent of a credit for a loss in viewing opportunity created by a contract dispute.

However, I would urge Amazon (and their studio partners) to be a little more reasonable in their pricing if this service is expanded to other channels.  $2.84 per HD episode ($1.89 per SD episode) seems a bit steep for only one episode of one series on one channel. HBO charges even more - $3 per HD, $2.23 per SD episode (at least for Game of Thrones) if you order a full season.  I think they should recalibrate and look more toward the 99 cents per episode price point.

However, if I could order by series for some offerings and by episode for other channels (like a particular program on Nat Geo), I would definitely consider relinquishing most of my cable tiers and going back to basic channels only.  As it is, I pay more than $100 per month for America's Top 250 so I can get access to History Channel International, but I end up watching Netflix most nights anyway.

I actually prefer to watch episodic dramas on Netflix so I can watch back-to-back episodes, allowing me to follow continuing story lines more closely and identify more deeply with the main characters.  I also don't have to worry about the network skipping weeks and preempting regular programming for things like sports playoffs or political conventions.  When that happens, I often lose track of when the series I am watching will resume and miss several episodes altogether, making it difficult to pick back up where I left off.  This happened to me with "Heroes" and with "Flash Forward", two excellent series that were both interrupted repeatedly by network "special" broadcasts of other programming, which caused me to become confused about what was happening and lose interest.

What we may really need is to go back to basic broadcast networks that focus on news and talk shows and use à la carte streaming for original dramatic series and educational or edutainment programs.  Sometimes I marvel at technology advancements that end up going full circle and resurrecting older business models.  Years ago, when PC networks were first being developed and individual workstations had relatively limited hard drive space, we used to encourage our users to store all of their data on the network server.  Then PCs began to have much larger hard drives, people began saving large multimedia files, and our network servers became overwhelmed, so we took a step backward and asked people to store only files being shared with others on the central network server and to use their local hard drives for their own personal data.  Then cloud services came along, and network infrastructure improved so much in transmission speed that we now again urge network users to store their data in the cloud and not on their hard drives at all.  So we are once again almost back to where we started!

Obviously, methods of delivery for video entertainment are in a dramatic state of flux right now and it will be interesting to see how everything shakes out. 

Tuesday, June 26, 2012

Taking a leap of faith with VuDu and Ultraviolet

Well, after three phone calls to VUDU customer care to discuss the details of their new Disc-to-Digital service linked to Ultraviolet, I decided to take the plunge and took my first batch of favorite movies to Walmart to be certified for my online film library.

I started out with 6 standard DVDs that I upconverted to HD and 1 Blu-Ray.  I paid $5 for each DVD and $2 for the Blu-Ray to add them to my VUDU and linked Ultraviolet account.  Other than the obvious reason of being able to access my movies when I travel, I had several other reasons to buy into this new program supported by Walmart and most of the major movie studios:



First of all, I felt the $5 upconvert and cloud storage option for standard DVDs that I own was a less expensive way to enjoy full HD versions without having to cough up $9.99 or more (especially with digital copies) for physical Blu-Ray discs.

Secondly, Ultraviolet allows up to five other people to be members of my account and enjoy my movies.  I'm hoping my son (who lives in Chicago and is an avid movie buff with slightly different tastes than mine) and I can share our movie libraries in this way.  This wish for reciprocal access was the reason for one of my phone calls.

Ultraviolet recommends setting up shared access by granting membership through the account management function.  However, I knew that I had never seen an interface option in VUDU to select a particular Ultraviolet library so I could choose between his collection and mine, so I asked VUDU how to accomplish this.  VUDU tech support actually recommended sharing a single Ultraviolet account with the same login name and password instead.  The technician explained that if my son and I each had separate Ultraviolet accounts, I would have to go online with my computer, unlink my Ultraviolet library, and link to my son's Ultraviolet library every time I wanted to browse and watch one of his movies instead of one of mine on one of my authorized devices.  If we both used the same Ultraviolet account, we could then link to it from our individual VUDU accounts and see all movies in the combined library, but still have individual access to VUDU for purchasing DVDs or authorizing a rental.  I suggested to VUDU tech support that a library selection function be added to the VUDU interface in the future so that we could someday use Ultraviolet's account management function, since it provides customized movie suggestions based on individual taste and lets each user customize the interface to their own preferences.

When I discussed this new service with my son, he wondered if the special features available on the DVD would be included in the digital online version.  He also wondered if you had to surrender your original discs when you took them to Walmart for verification.  This prompted another phone call.  VUDU told me that, unfortunately, the special features were not included in the online version, but I would not be asked to surrender my original discs, so I could always watch the special features from the original disc.

This was confirmed when I took my movies to Walmart.  The photo center customer service agent just retrieved my VUDU account by using my email address.  I had already queued up the movies I wanted to add to my online library, so she just had to enter a confirmation code.  Then she rubber-stamped each DVD around the center hole, so the disc could not be used by anyone else in the future to verify a different online library, and returned the perfectly playable discs to me.  I paid the advertised rate of $5 per DVD converted to HD and $2 for the online version of the Blu-Ray.

When I got home and logged into my Ultraviolet account my movies appeared in my library.  They also appear on my VUDU account.  So I guess I'm all set.

Now, playing my library with the VUDU application works on my PC or a Mac and on my Samsung TV, which has VUDU already installed in its suite of internet applications.  VUDU also works on an iPad (which, alas, I still do not have, making me still a victim of iPad envy), but there is no Android-compatible version available yet, and my phone and Nook Color run on Android.  However, I downloaded the newest version of Flixster to my phone, which has an option under your account settings to link it to your Ultraviolet library.  My movies appeared under "My Collection" in Flixster, and I was able to play them beautifully on my Droid 3.

My Nook Color, which I have rooted to a full Android tablet using a preconfigured SD card from Root My Nook Color, was a bit trickier.  First I attempted to update the Flixster application that was preinstalled from the Amazon App Store (because I always have better luck with it than with the Google Play Store).  It updated OK, but I discovered it did not offer me any option to play my movies, just to watch a trailer.  So I launched the browser, navigated to the Android Market, and updated Flixster again; this time it was the latest version that included the option to stream or download my movies.  However, when I selected "Watch Now" it kept saying it was initializing.  If I were seriously interested in watching movies on my Nook Color, I would have attempted a download instead to avoid problems with wifi speed and activity.  But I usually use my Nook Color in the living room, where I can watch movies on my regular HDTV, so I guess it isn't critical at this point.  I will email the developer of Root My Nook Color and see if he has any suggestions for tweaking the Nook Color for streaming from Flixster.  I have no problem streaming Netflix on my Nook, and trailers play fine in Flixster, so there may be just a setting or something that needs to be adjusted.  I had also moved the Flixster application to the 16 GB SD card I have in the Nook instead of leaving it running from internal memory, and maybe that is the problem.

There are a few things to remember about video quality differences between devices when using VUDU.  When watching movies through a web browser, the highest video quality you can currently stream is standard definition.  I expect this to change in the future.  At present, even though you can stream 3D from the regular VUDU application, 3D digital copies are not yet available through the Disc-to-Digital program.  

I also learned VUDU's HDX quality includes 7.1 surround sound if you have a home theater system capable of that quality.

Anyway, I now hope the Ultraviolet consortium (and VUDU, Flixster, etc.) is wildly successful so I can count on them to be my online movie repository.  My experiences have been quite favorable so far and I look forward to a long relationship.  Now if Disney would quit messing around with their own cobbled-up system and join the party!  I bought John Carter 3D with their version of a digital copy, and all it let me do was copy the movie to my own computer.  I can copy it from there to a Windows Media-compatible device, but that's an awful lot of wasted disk space when streaming from the cloud is so much better.  Come on, Disney!  Where's your imagineering???
