
Is Clippy the Future? February 8, 2013

Posted by Andre Vellino in Artificial Intelligence, Collaborative filtering, Data Mining.

The student-led Information without Borders conference that I attended at Dalhousie yesterday was truly excellent – as much for its organization (all by students!) as for its diverse topics: the future of libraries, cloud computing, recommender systems, SciVerse apps and the foundations for innovation.

At the panel discussion in which I participated, I suggested that to predict the future one need only look at the past. To predict the iPad, one needed only to look at the Apple Newton (which died in 1998). What, I wondered, was the analog for information management – an information retrieval tool, now dead and buried, that might still evolve into something we all want?

I proposed that the future of information retrieval might be something like an evolved Office Assistant (affectionately known as “Clippy”) – the infamous, now deceased Microsoft paperclip that helped you understand and navigate Microsoft products.

My vision for a next generation Clippy was clearly not well articulated since it prompted the following tweet from Stephen Abram:

[Tweet from Stephen Abram]

I think that Siri (about which I posted a few years ago) belongs to the old Clippy style of annoying, in-the-way-of-what-I-want-to-do applications. I am surprised that it has survived this long and that Apple promoted it so strongly. I predict it will join Clippy, Google Wave and Google Glasses on the growing heap of unwanted technologies that were not ready for prime time.

Watson (who is now going to medical school, and about which I also posted a couple of years ago) is, however, just the sort of Natural Language Understanding component technology that I have in mind for an interactive, personal information assistant. When a computer that now costs three million dollars and has 15 terabytes of RAM can fit in your pocket and cost $500, a Watson-like system that understands natural language queries will be an important component of Clippy++.

What neither Watson nor Siri has – and this, I foresee in my crystal ball, is the most significant attribute of “Clippy++” – is personalization and autonomy. What will make true personalization possible with “Clippy++” is our collective willingness to accept the intrusion of a mechanical supervisor that learns from our behaviour what we want, need and expect.

This culture-shift is happening right now – we gladly and willingly disclose our information consumption habits to supervisory software and data-analytics engines in exchange for entertainment and social networking. It won’t be long before we’re willing to do that for serious, personalized information management purposes as well.

The key, though, is going to be the interaction – the dialogue that we have with Clippy++ – and it will have to offer explanations for its actions and recommendations. That is going to be the hallmark of its evolution into Machina Sapiens.
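
To make this concrete, here is a minimal sketch (in Python, with invented item names and a deliberately crude learning rule – an illustration of the idea, not a design for any real system) of the loop I have in mind: the assistant observes what the user consumes, updates a preference profile, and can say why it recommends what it recommends:

```python
from collections import defaultdict

class ClippyPlusPlus:
    """Toy personal assistant: learns topic preferences from observed
    behaviour and explains its recommendations (a sketch, nothing more)."""

    def __init__(self):
        self.weights = defaultdict(float)  # topic -> learned preference strength

    def observe(self, topics):
        """Record that the user consumed an item tagged with these topics."""
        for topic in topics:
            self.weights[topic] += 1.0

    def recommend(self, candidates):
        """Pick the highest-scoring candidate and explain the choice."""
        def score(item):
            return sum(self.weights[t] for t in item["topics"])
        best = max(candidates, key=score)
        top_topic = max(best["topics"], key=lambda t: self.weights[t])
        return best["title"], f"Because you often read about {top_topic}."

assistant = ClippyPlusPlus()
assistant.observe(["recommender systems", "libraries"])
assistant.observe(["recommender systems", "nlp"])
print(assistant.recommend([
    {"title": "A survey of collaborative filtering", "topics": ["recommender systems"]},
    {"title": "Cloud billing news", "topics": ["cloud computing"]},
]))
```

The point of the toy is the last line of `recommend`: the explanation is generated from the same evidence as the recommendation, which is exactly what today’s opaque recommenders don’t do.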

The End of Files December 8, 2012

Posted by Andre Vellino in Data, Digital library.

A few weeks ago, I boldly predicted in my class on copyright that the computer file was as doomed in the annals of history as the piano roll (the last of which was printed in 2008 – see this documentary video on YouTube on how they are made and copied!)

This is a slightly different prediction from the one made by the Economist in 2005: Death to Folders. Their argument was that folders as a method of organizing files were obsolete and that search, tagging and “smart folders” were going to change everything. My assertion is that the very notion of a file – these things that are copied, edited and executed by computers – will eventually disappear (to the end-user, anyway).

The path to the “end of files” is more than just a question of masking the underlying data-representation from the user. It is true that Apps (as designed for mobile devices) have begun to do that as a convenient way of hiding the details of a file from the user – be it an application file or a document file. The reason that Apps (generally) contain within them the (references to) data-items (i.e. files) that they need, particularly if the information is stored in the cloud, is to provide a Digital Rights Management scheme. Which is no doubt why this App model is slowly creeping its way from mobile devices to mainstream laptops and desktops (viz. OS X Mountain Lion and Windows 8).

But this is just the beginning. There’s going to be a paradigm shift (a perfectly fine phrase, when it’s used correctly!) in our mental representations of computing objects, and it is going to be more profound than merely masking the existence of the underlying representation. I think the new paradigm that will replace “file” is going to be “the set of information items and interfaces that are needed to perform some action in the current use-context”.

Consider, as an example of this trend towards the new paradigm, Wolfram’s Computable Document Format. In this model, documents are created by dynamically assembling components from different places and performing computations on them: there are distributed, raw information components – data, mostly – that are assembled in the application and don’t correspond to a “file” at all. Or consider information mashups like Google Maps, where restaurant reviews and recommendations are generated as a function of search-history, location and user-identity. These “content-bundles”, for want of a better phrase, are definitely not files or documents but, from the end-user’s point of view, they are also indistinguishable from them.
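
To illustrate what I mean – with component names and an assemble() function that are entirely my own invention, not Wolfram’s or Google’s API – here is a minimal Python sketch of a content-bundle: a “document” that exists only as the result of pulling live components together for one use-context:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Component:
    """One distributed information item: a data source plus a renderer."""
    fetch: Callable[[], object]              # pull raw data (from anywhere)
    render: Callable[[object, dict], str]    # present it for a given use-context

def assemble(components: list, context: dict) -> str:
    """Build a 'document' on demand. Nothing here is a file: the result
    exists only for this user, this place and this moment."""
    return "\n".join(c.render(c.fetch(), context) for c in components)

# Hypothetical example: a map 'page' assembled from live parts.
reviews = Component(
    fetch=lambda: ["Great noodles", "Slow service"],
    render=lambda data, ctx: f"Reviews near {ctx['location']}: {'; '.join(data)}",
)
weather = Component(
    fetch=lambda: {"temp_c": -4},
    render=lambda data, ctx: f"Currently {data['temp_c']} C in {ctx['location']}",
)
print(assemble([reviews, weather], {"location": "Halifax", "user": "andre"}))
```

Run it twice with a different context and you get a different “document” – there is nothing stable enough to point at and call a file.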

Even MS Word DocX “files” are instances of this new model. The Office Open XML file format is a standardized data-structure: XML components bound together in a zip file. Imagine de-regimenting this convention a little, and what constitutes a “document” could change quite significantly.
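
You don’t have to take my word for it – a few lines of Python’s standard library will show the zip-of-XML-parts structure (the path below is hypothetical; point it at any .docx you have):

```python
import zipfile

# A .docx "file" is really a zip archive of XML components.
with zipfile.ZipFile("report.docx") as doc:   # any .docx will do
    for part in doc.namelist():
        print(part)   # e.g. word/document.xml, word/styles.xml, docProps/core.xml

    # The text itself is just one component among many:
    body = doc.read("word/document.xml")
    print(len(body), "bytes of XML in the main document part")
```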

Conventional, static files will continue to exist for some time, and version control systems will continue to provide change-management services for what we now know as “files”. But I predict that my grandchildren won’t know what a file is – and won’t need to. The procedural instructions required for assembling information-packages out of components, including the digital rights constraints that govern them, will eventually dominate the world of consumable digital content to the point where the idea of a file will be obsolete.

Marissa Mayer Wants to Read Your Mind August 14, 2012

Posted by Andre Vellino in Collaborative filtering, Digital Identity, Personal identity.

At about minute 3 of Charlie Rose’s Green Room interview with Marissa Mayer, the newly minted CEO of Yahoo offers a vision of the mobile future and asks “How do we create a search without search? Can we figure out the information you need before you even have to ask?” And, she says excitedly, “that’s really like mind reading technology!”

The inference? Be prepared for Yahoo to read your mind!

I have been a proponent of personalization since 2000, when I worked on developing “Personal Identity Management” services at Nortel. The idea at the time was (for a telecom company) to enable IP devices (routers / gateways) to track / manage / control your on-line identity and provide identity services (single sign-on, personalization of news services, etc.) to the user.
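
The core mechanism is simple enough to sketch. Here is a toy version in Python – a rough illustration of the single sign-on idea, emphatically not Nortel’s actual design: one trusted party signs an identity assertion, and every other service verifies the signature instead of keeping its own passwords:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-with-the-identity-provider"  # hypothetical shared key

def issue_token(user_id: str) -> str:
    """Identity provider: sign an identity assertion once."""
    payload = base64.urlsafe_b64encode(json.dumps({"user": user_id}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str):
    """Any relying service: check the signature, trust the assertion."""
    payload, _, sig = token.partition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(base64.urlsafe_b64decode(payload))["user"]
    return None

token = issue_token("andre")
print(verify_token(token))   # -> andre: one sign-on, honoured everywhere
```

The interesting question was never the mechanics – it was who holds SECRET, which is exactly the “network access vs. operating system vs. third party” question below.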

This was conceived at about the time that Microsoft Hailstorm was being launched. The only fundamental difference was which service provider – “network access” vs. “operating system” vs. “third party service” – would be the trusted source for managing your identity.

From a public relations point of view, Hailstorm and its successors, Microsoft Passport and Wallet, were a disaster. Invasion of privacy, identity theft – all the usual public anxiety buttons were pressed, and Microsoft dropped a lot of these products – or at least gave them a makeover.

Yet, a few internet generations later, these ideas persist. Google didn’t make a big PR campaign of it, but everything at Google is about personalization and localization, as illustrated most graphically by the (dystopic?) Google Glasses video.

But – fortunately, I might add – I am noticing a (small) swing of the pendulum away from machine-learning, Netflix-style personalization towards a “how do you want it?” style of personalization.

For instance, Google News used to be fully and automatically biased towards your location. Since the summer of 2011, Google has given the end-user a great deal more control.
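
The difference between the two styles of personalization is easy to state in code. In this toy sketch (my own formulation, certainly not Google’s algorithm), a single user-controlled weight decides how much the automatically inferred profile may bias the ranking:

```python
def rank_stories(stories, inferred_score, explicit_score, user_weight):
    """user_weight = 0.0: fully automatic ('we know what you want');
    user_weight = 1.0: fully explicit ('how do you want it?')."""
    def blended(story):
        return ((1 - user_weight) * inferred_score(story)
                + user_weight * explicit_score(story))
    return sorted(stories, key=blended, reverse=True)

# Hypothetical example: location-inferred vs. user-chosen sections.
stories = [{"title": "Local council vote", "section": "local"},
           {"title": "Election abroad", "section": "world"}]
inferred = lambda s: 1.0 if s["section"] == "local" else 0.2  # guessed from location
explicit = lambda s: 1.0 if s["section"] == "world" else 0.0  # user picked "World"

print([s["title"] for s in rank_stories(stories, inferred, explicit,
                                        user_weight=0.8)])
# -> ['Election abroad', 'Local council vote']: the user's choice wins
```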

Marissa Mayer may want to read your mind, but I know that most people don’t want to have their minds read by machines. I think the trend towards greater user-control will eventually spread to more personalization and recommender services. I hope so, anyway.

The Future of Universities is Here July 19, 2012

Posted by Andre Vellino in Open Access, Universities.

An impressive list of 16 universities (including the École Polytechnique Fédérale de Lausanne and the University of Edinburgh) have now signed up with Coursera to offer free on-line courses. I audited one a few months ago on Natural Language Processing (from Stanford) to see what it was like – it was stunningly good.

My very first thought was “the future of conventional universities is in doubt”. This course alone had 42,000 registrants, 24,000 of whom watched at least one video. Only 1,400 of the registrants got a “certificate of achievement” (i.e. completed the course and handed in all the assignments), but in the meantime there were 800,000 video-downloads of the courseware.

Distance-learning and on-line courses have been around for a long time – in the same way that “finger”, “who” and “chat” on Unix had been around a long time before Facebook, LinkedIn and Instant Messaging. The difference now is that major universities are jumping on the bandwagon and offering them for free. Why? Perhaps because of decreasing enrolment: free on-line courses are a way to recruit students from everywhere and to show them the best of what universities have to offer.

But also (in the US, anyway), education is a business (see the Frontline documentary on the business of higher education: College Inc.). That universities are feeling the financial pinch and being pressed by their boards to be more aggressive in the marketplace was perhaps most visibly illustrated at the University of Virginia (the case against on-line education is elegantly articulated by Mark Edmundson – a professor of English at the University of Virginia – in a New York Times OpEd article).

Making on-line courses available for free will become a moneymaker when they start counting towards a degree, which is clearly inevitable in the long run. However, I didn’t expect this development to come so soon after the beginning of the experiment. The Seattle Times reported just yesterday that the University of Washington is going to be offering some of its Coursera courses for credit.

Canada, in the meantime, has its own Canadian Virtual University which lists over 2,000 courses and 300 degrees and diplomas available on-line. The difference with Coursera is that the CVU is not free.

Anyone see any parallels with the publishing industry here?

Government Research in Canada July 8, 2012

Posted by Andre Vellino in Government Science, Universities.

When I started as a Research Officer at the National Research Council six years ago, the idea of “research” – in the sense of systematically studying a topic for the purpose of advancing knowledge in the field – was not only encouraged but constitutive of the job description. In most respects, the work of an NRC Research Officer was indistinguishable from that of a University Professor – minus the teaching responsibilities.

Since then, there has been a gradual but significant shift in the function of Government research institutions in Canada. For instance, according to a presentation given to “Re$earch Money” by the president of the NRC, its Vision is:

To be the most effective research and technology organization in the world, stimulating sustainable domestic prosperity.

And its Mission is:

Working with clients and partners, we provide strategic research, scientific and technical services to develop and deploy solutions to meet Canada’s current and future industrial and societal needs.

The first question that comes up with the Vision is: what is a “research and technology organization”? That phrase – “RTO” for those in the know – means something quite specific. It is a label for the set of things that includes such institutions as the Fraunhofer Institute and Battelle, but also Finland’s VTT (“Business from Technology”) and NATO’s RTO.

Organizations like that do interesting things: they are catalysts for exchanging information, they set strategies, give advice, design new products, patent processes and bring mature ideas to commercial reality.  All of this is useful and important but it isn’t “basic research”, at least not in the sense of “advancing knowledge”.

So what is happening to basic research in government? It is being outsourced to universities. The executive director of the Canadian Association of University Teachers (CAUT), James Turk, put it this way in an op-ed column in the Ottawa Citizen a few months ago:

[Minister Goodyear] claims that [the NRC] no longer needs to [undertake basic research] because universities today play that role.

But, Turk also points out,

Many university-based researchers rely upon the NRC for their scientific work. By gutting the basic research program of the NRC, the government will be weakening university research.

Thus, from the government’s point of view, basic research is best treated as an externality: it incurs long-term costs and yields no short-term benefits. By outsourcing research to universities, those long-term costs are downloaded to the provinces.

This was Nortel’s strategy too in its later years (~1995), and it was RIM’s as well (see also Canada’s Vanishing Tech Sector).

Steve Jobs was Right about AppleTV UI April 22, 2012

Posted by Andre Vellino in Information, User Interface.

AppleInsider reported a few weeks ago that Steve Jobs rejected – as long ago as five years – the newly introduced Apple TV user interface. Predictably, Steve was right: the new UI for AppleTV has major flaws in not just one but several dimensions: usability, cognitive modeling and information organization.

Consider this snapshot of the old UI:

The top third of the screen is reserved for image thumbnails that correspond to offerings in the highlighted service.  The remote’s navigation buttons change only the horizontal and vertical menu choices and the menus correspond to the categories of services available. [The top-level thumbnails are also accessible to get to the item directly.]

Admittedly there are some problems with this way of organizing the user’s entertainment options. One is that the top-level categories are not all the same kind of thing. “Internet” is a mode of delivery (which, of course, is also the mode of delivery for the rest of the AppleTV content), whereas the others describe the kinds of objects that sit below the main menu item. What “Internet” means, clearly, is “other, non-Apple applications”. In addition, more recent AppleTV top-level menus also have a “Computer” category, meaning “content streamed from your local computer running iTunes”, adding a second source-centered category.

However, at least the old interface makes some attempt at grouping content. Furthermore, the interface for the top-level navigation resembles in structure the navigation system implemented for each of the applications.  The interface has the consistency hallmark of Apple interfaces generally: learn the interface for one application and you know (more or less) how all the others behave.
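
You can put the consistency point in data-structure terms: in the old UI, one menu abstraction is, in effect, reused at every level, so learning it once means learning it everywhere. A rough Python reconstruction of the idea (mine, not Apple’s code):

```python
from dataclasses import dataclass, field

@dataclass
class Menu:
    """One navigation abstraction, reused at every level of the old UI."""
    title: str
    items: list = field(default_factory=list)  # sub-Menus or playable content

    def navigate(self, choice: int) -> "Menu":
        return self.items[choice]

# Top level and in-app levels share the same structure and behaviour:
movies = Menu("Movies", [Menu("Top Movies"), Menu("Genres")])
top = Menu("AppleTV", [movies, Menu("TV Shows"), Menu("Internet")])
print(top.navigate(0).navigate(1).title)  # same moves at every depth -> Genres
```

The new UI, by contrast, special-cases the top level (a flat grid) and then drops you back into menus – two abstractions where one used to do.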

Contrast this with the new interface. In some respects, it is similar to the old one – thumbnails of content-images appear at the top of the screen, as expected, and the content sources are more or less the same.

However, the artificial segregation by source or kind is eliminated altogether: all the applications are on the same footing, iPad-App style.

The first serious problem manifests itself when you scroll just one line down: the half-page-sized thumbnails disappear altogether. Yet the selected applications (I bet) are still generating those thumbnails – you just can’t see them any more.

Right away, this gives screen real estate dominance to the first row of applications – Apple iTunes applications, naturally. Furthermore, you can’t go straight to the items in the thumbnails because you can’t see them any more.

The second major flaw comes from the mixed-mode cognitive models.  The first-level application-selection mode is (vaguely) iPad-like (without the ability to group apps, rearrange them or create screen-pages). However, once you’ve selected an application you’re back to the (more familiar and sensible) menu-navigation system.

What’s worse, though, is that the menu system for each application is now no longer consistent.  “Movies” (short for “iTunes Movie Store”) has a Mac-style top-level menu-bar rather than a right-side menu navigation bar like all the other applications. Gone is the consistent Apple look-and-feel.

If only the user at least had the ability to group applications as they see fit and to delete the unwanted ones (why not? the iPod/iPad allows that).

There’s just no doubt about it. Steve was right.