Event Date: Oct 16 2009 – Oct 17 2009
Event Website: Event Webpage
City: Toronto, Ontario
Primary Contact Name: William J. Turkel
Contact Email: email@example.com
The participants were asked to gather into groups representing research communities, individual researchers, students and teachers, librarians, developers, content providers, and so on. Each group was asked to explore the issues that mashups and APIs raised for their particular stakeholders.
Research Clusters and Communities
Participants: Kevin Kee, Sean Kheraj, Shekhar Krishnan, Stéphane Levesque, Alan MacEachern, Geoffrey Rockwell, Tom Scheinfeldt.
GR: “APIs are useful translators for people who are going about their own business”
We must consider APIs:
i. within clusters
Universities support collaboration between colleagues within a university, but not with colleagues outside the university
ii. cluster to cluster
How do we transfer expertise in one cluster to another cluster?
iii. from clusters to public
Google as a model for API development
Do we feel threatened by Google because it is a quasi university (or pseudo university, depending on one’s perspective)? What can we learn from Google?
* Google never tried to do or be everything (à la Yahoo and its portal); instead it built lots of individual tools; in the same way, each of us should try to do one thing well.
* Google built very simple interfaces (one field, one button)
* Users learn how to use Google apps by trying them
* Its apps are transparent (scholars tend to make things opaque)
* Google gives employees 20% of their time to do what they want; we should do this with our grad students
* Google gives credit (humanists are not good at giving credit; scientists are much better)
How should we use APIs to support collaboration?
We need to define what scholars do that large corporations (e.g. Google) won’t or can’t do (this is what Zotero and Omeka have accomplished – each focused on a process of scholarship). Scholars:
* build databases
* create content
* print books (that last)
* go slowly
* focus on detail
* think long-term
* teach undergraduates and graduate students
* run journals, symposia, etc.
Potential deliverables (given what scholars do):
* Run a conference using GoogleDocs, where participants pay via PayPal, and another API provides simultaneous translation
* Develop a research exchange where scholars swap tasks (a credit/exchange)
i. knowledge of what is there
ii. the capacity to integrate it
iii. training for our students
We should build:
a. social APIs (not technical APIs)
b. tools and techniques to engage the public (e.g. so that members of the public could help researchers to do translation work)
Developers
Participants: Jeff Antoniuk, David Bamman, James Chartrand, Richard Deswarte, Stéfan Sinclair, William J Turkel (notes), Raymond Yee.
Utility of pair programming (as in extreme programming).
Humanities tool developers are often in the position of inheriting code other people have written, or inheriting student projects: a patchwork of pieces. Money is always a problem with humanities projects. Faculty hire students who don’t have any experience writing solid, maintainable code, and there is sticker shock when they hear the hourly rate that professional programmers charge.
Organizational models: (a) a research innovation manager hands work out to contract programmers; (b) a team of superheroes takes on a contract and parcels out the labour depending on team members’ specialties; (c) the ‘army of one’ problem, when you can only afford one person and want him or her to do everything from sysadmin to web-mastering; (d) Perseus: students develop theses in conjunction with grant funding, in parallel with the computer science department. More stability as you scale up a team; more crossover from one project to the next.
Developers have a desire to learn technical subjects and new software. Need to make our tools simple.
Audiences: general humanities. Question: for any group of people, when will they adopt a new tool? Where on the curve do they join? Trailing edge? How to get people to adopt something new? (a) one funeral at a time, i.e., convert their children instead, (b) convert them by analogy to other things they are already doing, (c) use a product like Zotero as a stepping stone, (d) convince a high-profile person to adopt new tool or methodology, (e) smuggle tool into web page by providing more than just simple searching.
Problem with a basic lack of honesty when scholars look on the web and then cite a source as if it were an actual library book they consulted. Seems that scholars are afraid that if they cite the electronic version it may change ‘out from under them’, but the convenience of consulting online resources is too tempting to pass up. Possible to keep and make accessible the history of changes and versions (e.g., Wikipedia history page, History Flow, Versionista). Scholars need to understand the profoundly philological nature of tools like wikis, and realize that it is possible to cite a particular version or revision.
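As a minimal sketch of citing a particular revision: MediaWiki-style wikis expose a permanent link for each revision via an `oldid` parameter, so a citation can point at one fixed version rather than the live page. The page title and revision id below are hypothetical examples.

```python
# Sketch: build a stable citation URL for one MediaWiki revision,
# so the cited text cannot change 'out from under' the scholar.
from urllib.parse import urlencode

def revision_permalink(base_url, title, oldid):
    """Return a URL that cites one fixed revision, not the live page."""
    query = urlencode({"title": title, "oldid": oldid})
    return f"{base_url}/index.php?{query}"

url = revision_permalink("https://en.wikipedia.org/w", "Philology", 123456)
```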
Tools to automate the creation of APIs; scrAPIs. Wikis of scrapers (like Zotero translators). Possible to crowdsource the problem of scraping? Content providers might like to provide an API but it isn’t high on their list of priorities; maybe a service to match scraper to content provider.
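A scrAPI at its core is just a parser that turns a page without an API into structured records. A minimal sketch, using only the standard library; the HTML string is an inline stand-in for a real content provider's page, and the `class="title"` markup is an assumption for illustration.

```python
# Sketch of a "scrAPI": extract structured records from a page
# that offers no API of its own.
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    """Collect the text of every <h2 class="title"> element."""
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

page = '<h2 class="title">Mashups</h2><p>...</p><h2 class="title">APIs</h2>'
scraper = TitleScraper()
scraper.feed(page)
# scraper.titles == ["Mashups", "APIs"]
```

In a crowdsourced wiki of scrapers, each entry would be a small parser like this, matched to one content provider's page layout.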
Nice to have scraping functionality in browser, but Firefox extensions are fragile. Chickenfoot provides one model.
Raymond Yee: idea of a “personal service bus”: inform when library books are due, send on message bus; calendar listens on bus and makes an entry; notification by SMS. What kind of notifications would you want? How does it work? REST-based, listener gets URL and periodically makes a call; rules determine what happens; bus is dead simple. Need to minimize all the copying and pasting we do; minimize the amount of direct coupling, which is easy to implement, but which explodes with the size of the network.
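The bus idea can be sketched in a few lines: publishers post messages on a topic, and any number of listeners react independently, so the direct coupling between services disappears. The topics and listeners below are illustrative, not from any real system.

```python
# Sketch of a dead-simple "personal service bus": publish/subscribe
# with no direct coupling between the services involved.

class ServiceBus:
    def __init__(self):
        self.listeners = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.listeners.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.listeners.get(topic, []):
            callback(message)

bus = ServiceBus()
events = []

# A calendar listens for due-date notices and makes an entry.
bus.subscribe("library.due", lambda m: events.append(("calendar", m)))
# An SMS gateway listens on the same topic, independently.
bus.subscribe("library.due", lambda m: events.append(("sms", m)))

bus.publish("library.due", "Book due Oct 23")
```

Adding a new notification channel means adding one subscriber; nothing else on the bus needs to change, which is what keeps the coupling from exploding with the size of the network.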
Customer relation management (CRM) is big business — is there a scholarly equivalent?
Our Ontario / Alouette
Notes from Walter Lewis.
Dan Chudnov: talking about the web as API.
- linked data: we have the data to support linked data (such as pointers to external subject lists and geography) in the Vita interfaces
- clean URIs: already in the spec for the next build of Vita
- meta link tags: will add to spec for next build of Vita. The argument for clean values that facilitate caching is a very persuasive one (not that we’re expecting 9000 requests per second)
- atom protocol: need to get the opensearch support from dev into production
Zotero support: need to look at what other possibilities for Zotero support exist via its APIs; further the discussion of tagging from “unAPI” to a more specific “source” value
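Consuming such a feed is straightforward: OpenSearch results come back as Atom, which any XML parser can walk. A minimal sketch, with a small inline feed standing in for a live Vita/OpenSearch response.

```python
# Sketch: parsing an Atom feed of search results, as an
# OpenSearch endpoint would return them.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Search results</title>
  <entry><title>Record one</title></entry>
  <entry><title>Record two</title></entry>
</feed>"""

root = ET.fromstring(feed_xml)
titles = [e.find(ATOM + "title").text for e in root.findall(ATOM + "entry")]
# titles == ["Record one", "Record two"]
```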
NYPL maps: what an extraordinarily powerful interface for “warping” (aka georectifying) two-dimensional maps into real space and placing them in layers and times. We have a variety of content that could be “placed” in those map layers. The current approach with x/y co-ordinates of the scans is not nearly as extensible.
Open Annotation Collaboration offers a fresh look at the “comments” problem set. We need to track it.
We need to complete the documentation of our interfaces to our web services (probably after the current round of development is completed)
Visualizations: for documents with significant blocks of text we should look at word-cloud visualizations; we now have sufficient data to do some timelining of results (supported by the new temporal facet schema)
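The data behind a word-cloud visualization is just term frequencies, which the standard library can compute. A minimal sketch; the stopword list is a toy example.

```python
# Sketch: term frequencies for a word-cloud visualization.
from collections import Counter
import re

STOPWORDS = {"the", "of", "and", "a", "to"}

def word_weights(text):
    """Return (word, count) pairs, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common()

weights = word_weights("The history of the lakes and the history of trade")
# weights[0] == ("history", 2)
```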
It is possible that, if a project’s metadata is ingested into the OurOntario/Alouette index, then apart from the exposure its services get via the portal, the additional services built on the portal (RSS, Atom, RDF, MODS, Dublin Core, raw Solr + spellcheck) may be of use in other use cases.
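Hitting the raw Solr layer of such an index is a matter of building the right query string. A sketch of the mechanics; the endpoint and the `title` field are hypothetical, not the actual OurOntario/Alouette configuration.

```python
# Sketch: building a raw Solr select URL, including the
# JSON response writer and spellcheck component.
from urllib.parse import urlencode

def solr_select_url(base, q, rows=10, wt="json"):
    """Return a Solr /select URL for query q."""
    params = urlencode({"q": q, "rows": rows, "wt": wt, "spellcheck": "true"})
    return f"{base}/select?{params}"

url = solr_select_url("http://example.org/solr/core", "title:lakes")
```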