Archive for category Semantic Web

Zuhandenheit and the Cloud UI

Derek Singleton from Software Advice recently wrote a blog article postulating that it might now be time to formulate a set of standards for the user interface of cloud applications. I think he was mainly referring to business applications in the cloud – you know, ERP systems, CRM systems, etc.

Reasonable enough concept, and easy to understand within the frame of retrospective analysis of technological (and, in many cases, social) advances. Standardisation typically allows one to conveniently abstract out (sometimes complex) details of particular technologies, such that one can concentrate on the task at hand, rather than having to focus on solving a foundational issue for every case. Standards allow for rapid incremental progress.

The ICT world is replete with examples of standardisation assisting in such a manner. Indeed, one could argue that most of ICT only works because of such an approach. I am writing this on a tablet computer, sitting outside in the early morning, experiencing one of the most beautiful environments in the world. The tall straight trees crowned in a verdant canopy juxtaposed against the bluest of cloudless blue skies stretching forever would bring joy to any soul in any situation. (Why, then, you should ask, would I be writing about Cloud UI standards when presented with such a sight? Well might you ask. Because there is no answer to such an antithetical query. Let us continue).

I can concentrate on thinking and writing what you are now reading because I do not need to think about how the buttons on the keyboard connected to the screen are translated into electrical signals, which are then interpreted by an operating system (in simple terms – I don’t want to explain everything about every little element of computers) into characters for input into a program, which will then issue instructions to display said characters on a screen, not worrying about how shapes are pixelated to create a readable representation for me to understand and continue writing from.

Not only do I not need to worry about such issues, but the creators of the program I am currently using (Evernote – fast becoming a “must-have” on every portable computing device in the world) also did not need to worry about such issues. The abstraction of the operating system from the base hardware, according to various standards (de facto and de jure – in a sense), meant that they could concentrate on implementing a wonderful information management tool. Further than that, a standard approach to certain elements of using the operating system meant that they could implement their program relatively straightforwardly (I am not saying it was easy) on a number of operating systems.

Such capabilities would have been unthinkable fifty years ago (in ICT terms). I am also able to take advantage of a WiFi connection to sit outside and enjoy the world around me (looking up when the kookaburra call intrudes, to once more savour the sight), in order to record the information I have typed, through the internet, on a server somewhere in the world (as well as on my tablet). Do I need to know about channels and frequencies, IP addresses and DHCP, routers and relays? I think not. Thank goodness I don’t. If I had to deal with any of these issues every time I had to write something, nothing would ever get written. Ditto for the Evernote programmers. Ditto for the Android programmers. Ditto for the providers of the cloud service running the data storage. Ditto for almost everyone marginally involved in the vast network of relationships of functional delivery of this technological world, apart from the small number of people who must actually create or fix such networking software and hardware.

This is standardisation at work. And it works. Much more than it doesn’t work. And it creates and creates and creates. It applies not only to ICT, but to engineering generally, science generally, any technological endeavour generally, human-centred and planned systems generally, and even throughout society. If one wanted to wax philosophical, there could be some deep reasons behind all this.

Which indeed there appear to be, through the work of Martin Heidegger (1889-1976), one of the most pre-eminent philosophers of the 20th century. Do yourself a favour (to channel a minor celebrity): read up on the works of Martin Heidegger, even if you don’t read his actual works (he is a little hard to get into – not least due to the very specific terminology that he uses, relying as it does on the original German). Heidegger attacked questions of the essence of Being, of what it is to exist, within the world in which we live.

His work thus addresses issues of how we, as humans, operate within the world, how we interact with things and other entities. This has applicability not only to general human behaviour, but also to computing – how we use and interact with computer systems (as artefacts in the world). This thesis was developed and written about by Fernando Flores and Terry Winograd in their seminal book “Understanding Computers and Cognition”. They described the Heideggerian concept of Zuhandenheit (“ready-at-hand”) in computing terms. In essence, ready-at-hand (or readiness-at-hand) is the concept of how some artefact or entity becomes part of our world in such a manner that one does not need to think at all in order to use it. Its place in the world (which includes how it is used) is at all times “ready” for us to use (or relate to). There is no necessity to remove oneself from the world situation one is currently in, in order to deal with the artefact. It is only when the entity changes from Zuhandenheit to Vorhandenheit (“Present-at-Hand”) that it comes into the foreground of one’s attention, and must be dealt with consciously in some manner.

The famous example used is that of a hammer. In normal circumstances, for most people brought up in a civilised Western tradition (the context is rather important for the concept of Zuhandenheit), a hammer is “ready-at-hand”. One simply picks it up and hammers, without needing to consciously think about how to use the hammer. It is only when the hammer is broken in some manner that it becomes “present-at-hand” – when the item is now “present” in front of one, requiring one’s attention in order to fix it, or to work out how to use it in its broken state.

Present-at-hand means that one is not concentrating on the task required or desired, but rather must focus attention and consciousness on the entity which is present-at-hand, determining what to do with it and how to use it, possibly fixing it, before being able to re-focus on the task at hand.

Present-at-hand is a necessity in many situations, particularly novel circumstances, but is positively detrimental if it becomes overwhelming. Items present-at-hand must fade into ready-at-hand if one is to successfully navigate the complex and chaotic world one lives and works in.

This concept of Zuhandenheit applies directly to computing. One could argue that the most effective computing is that which is fully ready-at-hand. No need to think; simply access the computing facility and it is done for one. It is the dream of much of the artificial intelligence community. It is represented in science fiction – such as the computer (with the voice of Majel Barrett) in Star Trek. Always there, one simply needs to talk to it; it perfectly divines one’s needs and intentions, and then executes without error. On the bridge of the Enterprise, there is never a need to get the manual out to work out which menu item hidden five levels deep in a dialogue box, which arcane key combination, or which parameter in which format for which API call one needs to know, understand and apply to get something done. If that were the case, the Klingons would have long ruled the Empire (to mix filmic metaphors a touch).

Of course, the current state of the art is light-years removed from the science fiction of Star Trek and other futuristic visions. But the basics of the concept apply to everyday use of computers. It would be a tedious working life if every time one had to type a memo, one had to look up help or read a manual or ask for assistance to perform the simplest of activities – to underline a phrase, or to indent a paragraph, or edit a mis-typed word. Work would barely be finished if every document was an arduous “hunt and peck” on the keyboard.

For most of today’s office workers, the QWERTY keyboard is ready-at-hand. One does not need to think to use the keyboard (even if one is looking at the keys to ensure that the fingers are in the correct place). One can concentrate on what is to be said rather than on where on earth that “!” key is. The readiness-to-hand only needs to break marginally for the frustrations and problems of the present-at-hand to become apparent. If you were brought up in North America, England, Australia, etc., have you ever tried typing on a German keyboard, or a Spanish keyboard? Have you gone to hit the @ key for an email address and found it is a completely different character? Now where is that @ key? I simply want to type in my email address, which typically takes all of one second, yet here I am desperately scanning every key on this keyboard to find a hidden symbol. Finally, after trying two or three other keys, there it is. Thank goodness. Only to next be snookered because the Z key is now somewhere else as well.

But being ready-at-hand does not necessarily mean the best or most efficient (or effective). Over the years, many people have contended that the Dvorak keyboard is a much better layout to use, from a speed, typing accuracy and ergonomic perspective. But it has never caught on. Why? Mostly because of the weight of “readiness”, the heaviness of “being”, which is the qwerty layout.

People develop what is loosely called “muscle memory”. Their fingers know where to go, just through the muscles alone – they do not need to think about where their fingers need to be in order to type a word; the muscles in the fingers automatically move to the correct spots. No thinking required. All the mental processing capacity in one’s brain can be applied to what is being written rather than the typing itself (note the ready-at-hand aspect). It is just too much effort to re-train one’s muscles, with little perceived benefit. And if the entire consumer population “professes” to wanting/using a qwerty keyboard, then all the vendors will supply qwerty keyboards. And if the only keyboards commonly available are qwerty keyboards, then people will only know how to use a qwerty keyboard, their muscles will be trained accordingly, and they will only want to use a qwerty keyboard. And thus the cycle repeats on itself. Self-reinforcing. Qwerty keyboard it is, from now until eternity.

Unless there is a disruption or discontinuity. So, if there is no keyboard, how will one type? Or if there are only a limited number of keys, how does one enter text from the full alphabet? Enter the world of mobile phones, and then smartphones. With only a limited number of numeric keys, text can be entered using innovative software (i.e. predictive text). When first attempted, this situation is blatantly “present-at-hand”. It takes some time to get used to the new way of entering text/typing. But for most people, some continued practice whilst fully present-at-hand soon leads to such typing being ready-at-hand (although maybe not as effective or efficient as a full-sized qwerty keyboard).
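To make the predictive-text idea concrete, here is a minimal sketch of how a T9-style entry scheme can resolve key presses into words. The keypad mapping is the familiar phone layout, but the word list, function names and output are purely illustrative – this is not any vendor’s actual implementation.

    # Minimal sketch of T9-style predictive text on a 9-key numeric keypad.
    # The word list below is a tiny illustrative sample, not a real lexicon.
    KEYPAD = {
        'a': '2', 'b': '2', 'c': '2', 'd': '3', 'e': '3', 'f': '3',
        'g': '4', 'h': '4', 'i': '4', 'j': '5', 'k': '5', 'l': '5',
        'm': '6', 'n': '6', 'o': '6', 'p': '7', 'q': '7', 'r': '7', 's': '7',
        't': '8', 'u': '8', 'v': '8', 'w': '9', 'x': '9', 'y': '9', 'z': '9',
    }

    def digits_for(word):
        """Translate a word into the digit sequence a user would key in."""
        return ''.join(KEYPAD[ch] for ch in word.lower())

    def build_index(words):
        """Group candidate words under their shared digit sequence."""
        index = {}
        for word in words:
            index.setdefault(digits_for(word), []).append(word)
        return index

    index = build_index(['home', 'good', 'gone', 'hood', 'hand'])
    # '4663' is ambiguous: the software offers candidates and the user picks one.
    print(index['4663'])   # ['home', 'good', 'gone', 'hood']

The point, for present purposes, is that the ambiguity is resolved by software rather than by extra keys – which is exactly the shift from present-at-hand fumbling to a new readiness-at-hand once the scheme is learnt.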

But, be aware, not all people make the transition from the present-at-hand to the ready-at-hand. Some people just cannot use predictive text entry – they revert to previous methods of phone use, or to no texting at all.

What happens if there are not even numeric keys? The touch-sensitive smartphone (or tablet) can emulate a qwerty keyboard (or a slightly modified qwerty keyboard), but the reduced efficacy of the touchscreen qwerty keyboard makes it rather less ready-at-hand. Each time a character is not typed correctly, or multiple keystrokes have to be executed to enter a single character (such as when entering some of the special characters using the standard iPhone onscreen keyboard), or a word is auto-corrected incorrectly, the artefact is “broken” – a little like the broken hammer – and has to be used specially, now “present-at-hand”, in order to achieve the end result. This makes it susceptible to allowing one to learn an alternative process of entering text (a new present-at-hand leading, possibly, to a new ready-at-hand) which may be more efficient. The barriers to learning anew are reduced, allowing the learning to be attempted, and the new thereby mastered.

Such thinking tends to be somewhat embedded in design culture. Take, for example, this excerpt from an InfoWorld article on the use of iPads at SAP:

“Bussmann is a big fan of design thinking, as he says the mobile experience is different even for things that can be done on a PC. He notes that people seem to grok and explore data more on an iPad than a PC, even when it’s the same Web service underlying it. He notes that PCs tend to be used for deep exploration, while a tablet is used for snapshot trend analysis. He compares email usage on a PC to email usage on a BlackBerry, the latter being quicker and more of the moment. If mobile apps were designed explicitly for the different mentality used on a tablet, Bussmann believes that the benefits of using iPads and other tablets would be even stronger.”

The point is that different artefacts have different contexts, leading to different modes of operation (new elements are ‘ready-to-hand’ which allow new modes of thinking and operation).

Good design (in a computing context) is all about readiness-to-hand: how to design and implement a facility which is useful without requiring excess effort; how to design something that works without undue learning and constant application of thought.

What does this mean in relation to UI standards?

Such standards are an attempt to generate a readiness-to-hand. If every keystroke means the same thing, across all applications, then “muscle memory” can kick in, and one need not bother learning certain aspects of the application, but rather, can commence using it, or use various (new) features, without undue effort.

A simple example. In many applications under MS Windows, the F2 key “edits” the currently selected item – say a filename within a folder – allowing one to change the name of a file. When working with many documents and files on a continuous basis, it has become almost instinctive for the second littlest finger on the left hand to reach for and hit the F2 key, then type in the new name for the file. When hitting the F2 key does not result in allowing the filename to be changed, but rather results in some other action, frustration soon sets in (and by soon, one means explosively and immediately).

The next question is – which key renames a filename? None of the current F keys. Access “Help”. Find that it is Shift-F6. Now, who would have thought that? Why pick Shift-F6? Why not Ctrl-Alt-Shift-F12? Or any other key combination? I am sure that the author of the software had a good reason to use Shift-F6 for file rename (indeed, if one carefully reviews all the key combinations for functions in the program, one can see some logic relating to the assignments of keys). Further, in some deep recess of memory, I recollect that the key assignments may relate to use with another operating system, in another context, according to another (so-called) “standard”. None of which is much help when one is suddenly thrust from readiness-at-hand into present-at-hand and spends valuable time trying to achieve a very simple operation.

Fortunately, the author of the program wrote his code such that every function can be mapped to a different key (and vice versa, obviously). This means that I can re-map the F2 key to perform a file rename. A bit of a pain when it has to be done on every computer that the program is used on (the trauma and destructiveness of the PC-centric world – which, one day, the “cloud” world will fix), but the benefit of having File-Rename ready-to-hand outweighs the cost of having to perform a key re-mapping once in a while (for me at least).
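Conceptually, such re-mapping is nothing more than a user-level table consulted before the program’s defaults. The sketch below is hypothetical – the program in question is deliberately unnamed above, and the function names and bindings are illustrative only – but it shows the shape of the mechanism:

    # Hypothetical sketch of per-user key re-mapping. The function names and
    # default bindings are illustrative, not those of any particular program.
    DEFAULT_KEYMAP = {
        'Shift+F6': 'rename_file',      # the surprising default discussed above
        'F2': 'some_other_action',
    }

    USER_OVERRIDES = {
        'F2': 'rename_file',            # restore the Windows-style muscle memory
    }

    def resolve(keystroke):
        """User overrides win; otherwise fall back to the program defaults."""
        return USER_OVERRIDES.get(keystroke, DEFAULT_KEYMAP.get(keystroke))

    print(resolve('F2'))        # rename_file
    print(resolve('Shift+F6'))  # rename_file (the old binding still works)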

Unfortunately, other people may not be quite as “techno-savvy” as I am, and may not realise that it is possible to re-map the function keys. Or programs are not coded in such a flexible manner, such that function keys cannot be re-mapped within the program. Thus, people are “forced” to endure a facility that is plainly not ready-at-hand – and waste hours of valuable time, building levels of frustration that can never be relieved. It is no wonder that many people find dealing with computers difficult and “non-intuitive” (a nebulous, ill-defined term that I normally eschew in favour of the more technical and proper philosophical and psychological terms which may apply – but used here as a reference to the nebulous ill-at-ease feeling that people dealing with such non-ready-at-hand computing describe as “non-intuitive”), and that people, once they have learnt one program or system, are loath to learn another, and are slow to embrace additional capability or functionality within the program or system that they (purportedly) know.

Or, in another twist on not being ready-at-hand, the program (or operating system, or other facility) may simply not implement the desired capability. Thus, on the tablet I am currently using to type this exploration, there are no keystrokes to move back (or forward) a single word, even though the tablet has a keyboard attached (which is not too bad, but does miss keystrokes on a regular basis – thus making it also non-ready-at-hand in an irritatingly pseudo-random manner), and that keyboard has arrow keys and a control key on it. Using the Android operating system and Evernote (and other programs), there is simply no quick, keyboard-oriented way to go back many characters to fix a typo (refer to the previous problem of the keyboard missing characters, necessitating quick repositioning of the cursor to fix the problem). Thank goodness that the operating system / program implemented the “End” key, to quickly go to the end of the line. Under MS Windows, Linux and other operating systems, and the major word editing programs on those systems, the Ctrl-Left Arrow combination to go back one word is almost always implemented. When it isn’t, it is sorely missed.

No doubt the developers of Android tablet-based systems are working with the paradigm that most of the navigation will be performed using the touchscreen capabilities – and that, therefore, this will suffice for moving back a word or so. Little do they realise that forcing someone to take their fingers off the keyboard and try to place them on a screen, in a precise location, violently yanks them from being fully ready-at-hand (typing, typing, typing as quickly as they can think) into a difficult present-at-hand situation (have you ever tried to very precisely position a cursor in smallish text on a partially moving screen with pudgy fingertips?).

Thus, one of the issues for readiness-to-hand and computing relates to the context in which an element is meant to be ready-at-hand. Mix contexts, take an element out of one context and place into another, and suddenly what works well in one place does not work so well anymore.

Which leads back to the question of standards for UIs. Such standards will need to be thought through very carefully – lest they lead to further “presentness”, and not the desired effect of “readiness”. This is predicated, obviously, on the proposition that the main reason for promulgating and adopting standards is to effect a greater ready-at-hand utility across the elements of one’s computing usage. The standards are meant to enhance the situation whereby knowledge gained in the use of one system, program or facility can be readily transferred to the use of another system, program or facility.

So, promulgating a standard that “theoretically” works well in one context may actually lead to a deterioration in readiness-at-hand in another context. This may be at the micro level – for instance, a standard for certain key-mappings which works well within a browser-based environment on a PC may be completely ineffective on a smartphone or tablet, and even detrimental if the alternatives to using those keys are not well thought through on the other platforms.

The “breakage” may be at a more macro level, whereby standards relating to screen layout, menus and presentation elements may be completely inappropriate and positively detrimental when the program requires a vastly different UI approach in order to maximise its facility (efficacy and efficiency). Thus, screen presentation standards for an ERP suite may be wholly inappropriate for the business intelligence aspects of that ERP suite.

Or, to put it another way, one may implement a BI toolset within the ERP suite that conforms to the prevailing UI standard. Its readiness-at-hand will be sub-optimal. But the hegemony of the “standard” will prevent efforts at improvements to the BI UI that do not meet that standard, thus stagnating innovation but, more importantly, consigning those using the system to additional and wasted effort, as well as meaning that such a system will not be used, or not used to the extent that it could or should be. I am sure that many readers have numerous examples of implementing systems which, on paper, tick all the boxes and seem to address all criteria, yet are simply not used by those in the organisation that such systems are intended for (or are used grudgingly and badly). One of my favourite tropes in this area is corporate document management systems. Intended for the best possible reasons, with the weight of corporate “goodness” behind them (just like white bread), they never seem to deliver on their promises, are continually subverted, and blatantly fail the readiness-at-hand test, both for material going in and for retrieving material later. Unfortunately (or fortunately for this missive), that is a story for another day.

When faced with the intransigence of a “standard”, either de facto or de jure, it typically takes a paradigm shift in some other aspect of the environment to allow a similar shift with respect to the “standard”. Thus, in recent times, it was the shift to keyboard-less touchscreen smartphones and tablets which required a rethink of the predominant operating system UI. For a variety of reasons, that UI (or, let us say, UI concept) quickly came to prominence and predominance (debatable, but defensible if one considers that Apple, as the main purveyor of such UIs, has the greatest capitalisation of any company in the world at the moment – not directly because of the UI (i.e. there is no direct causal connection), but with the UI no doubt making a contribution to the success of the offerings).

It took that paradigm shift across a range of factors in the “world-scene” of computing to make such large changes to the UI standards. The incumbent determinant of the standard (in this instance, let us say, Microsoft Windows) would not make such large-scale changes. Their inertia, and the inertia of those using the system, would not allow large-scale changes to be made. Not until the incumbent was faced with the overwhelming success of an alternative was such an alternative belatedly added to the UI for Windows (well, in what could graciously be called an embodiment of the new alternative UI, if that is what one so desires).

This is not to say that such inertia, or the definition of a “standard” (de jure or de facto), is necessarily a bad thing. Making too many changes too quickly, in any environment, leaves too much “broken” into present-at-hand, requiring too much effort simply to achieve day-to-day operation. The destruction of readiness-at-hand becomes so unsettling that little of effect is achieved. Herein lies the rub with “change” within an organisation. Change is a necessity (change is simply another name for living and thriving. If one’s body never changed, one would be dead. Think of the biology – the deep biology), but any change means at least something becomes present-at-hand, at least for a moment, until it is processed and absorbed into being ready-at-hand. The secret to success at change management is minimising this disruption from ready-at-hand to present-at-hand to ready-at-hand again.

“Inertia” is the means whereby learned behaviours are most efficiently exploited to the greatest effect – PROVIDED that the environment or context has not changed. Such a change necessitates a re-appraisal, since, as per the definition above, something ready-at-hand in one context or environment will not be so in another (almost by definition).

Thus, a standard for the UI of cloud-based applications, principally ERP applications (and those applications orbiting within the ERP “solar system”), is a “GOOD THING” – provided that the contextualisation of those standards is well thought through (well defined, well documented and truly applicable), that there is always a pre-eminent awareness that such standards may retard innovation, and that there are means whereby such standards can co-exist (in some manner, and it is as yet rather unclear how this could actually work in practice) with a new or different set of UIs for different purposes.



IBM Watson and Jeopardy

IBM’s Thomas J Watson Research Center in Yorktown Heights, NY has recently completed a new grand challenge – to program a computer to play the quiz game “Jeopardy”.

I have been following this (as have many, many other people) – and it has been absolutely fascinating.
This link (http://www.youtube.com/watch?v=3bifUJCyMwI) to a YouTube video of the final session should also give you links to the previous sessions over the 3 days. The link to details of Watson (http://www-943.ibm.com/innovation/us/watson/) will no doubt also give you the relevant video links, and much more. More information can also be read at Mashable in an article on Watson and an interview with Stephen Baker.

Basically, IBM have developed a natural language processing and deep analytics question-and-answer system, using massively parallel processing and huge amounts of memory (as stated here: 2,880 processor cores across 90 Power 750 servers and 15 terabytes of RAM) to implement a system which can answer almost any sort of general knowledge question (asked in a variety of ways, including through association, analogy, puns, etc.) – and to get so many correct that Watson totally beat the best human players.

The results were fascinating.

At the end of the first day, Ken Jennings was on $4,800 and Brad Rutter was on $10,400, but Watson was on a massive $35,734 (I also answered the questions as they appeared on the screen and achieved $22,400 – although one cannot completely equate the results, since the physical element of having to press the button first when the light comes on and then answer was not the same for me, watching it on a computer screen).

At the end of the second and final day, Brad scored $5,600 before Final Jeopardy and wagered the lot to obtain $11,200, which gave him a total of $21,600 over the 2 days.

Ken did much better on the second day, managing a pre-Final Jeopardy score of $18,200, but only wagered $1,000 to finish with $19,200, for a total of $24,000 over the 2 days.

But Watson. Well, he (since we can really be anthropomorphic here) scored $23,440 before Final Jeopardy, wagered $17,973 to make his daily score $41,413, and a massive total of $77,147 for the 2 days.

(By the way, I managed $14,000 for the second day, wagered the lot and got the Final Jeopardy answer correct (it was Bram Stoker) to finish with $28,000 for the day and $50,400 over the 2 days.)
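For those who like the arithmetic laid out, a trivial tally of the two-day totals, using only the per-day figures quoted above, looks like this:

    # Two-day Jeopardy totals from the figures quoted above:
    # (day 1 final score, day 2 score after Final Jeopardy).
    scores = {
        'Ken Jennings': (4_800, 19_200),
        'Brad Rutter': (10_400, 11_200),
        'Watson': (35_734, 41_413),
        'me, at home': (22_400, 28_000),
    }

    for player, (day1, day2) in scores.items():
        print(f'{player}: ${day1 + day2:,}')
    # Ken Jennings: $24,000 / Brad Rutter: $21,600 / Watson: $77,147 / me, at home: $50,400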

The prize money of $1,000,000 awarded to Watson was donated by IBM to World Vision and to the World Community Grid, while half of the second prize of $300,000 (to Ken) and half of the third prize of $200,000 (to Brad) were donated to other charities.

Two important take-aways from this brilliant piece of research.

Firstly, this technology from IBM has so many uses – not just in the medical field (where the first offerings appear to be) but also in the energy and resources fields, in urban planning, and certainly in the legal and justice fields. The ability to ingest natural language materials (such as legislation, case law, briefs, submissions, depositions, statements, judgments and miscellaneous other materials) and then to answer complicated questions concerning that material (and to link to associated material not previously related to the matter) will be extremely important in the future.

Secondly, IBM Watson was truly amazing. Certainly a breakthrough in technology. But the human beings standing there, who did pretty well against the massive machine, were still, themselves, rather incredible. Humans, in essence, are still mighty powerful. The Jeopardy show had to be filmed on a special set built at the IBM research facility, because the computer system comprising Watson took up a whole room and was too massive to move, whereas Ken and Brad simply walked in wherever they were needed and did their thing. Mind you, computer systems in the 1960s and 1970s took up whole rooms – and their capability would now be eclipsed by an iPad or a small notebook computer. Twenty years from now, Watson will definitely be in the palm of one’s hand (in one form or another).


CyberDewey

This is Google’s cache of http://www.anthus.com/CyberDewey/CyberDewey.html as retrieved on 15 Oct 2007 13:47:33 GMT.

A Hotlist of Internet Sites organized using Dewey Decimal Classification codes.

000 Generalities

  • 000 Generalities (427)
  • 010 Bibliography (60)
  • 020 Library and Information Science (47)
  • 030 Encyclopedias (14)
  • 040 Unassigned (0)
  • 050 Magazines (12)
  • 060 General organizations and museology (18)
  • 070 Journalism (44)
  • 080 General collections (9)
  • 090 Manuscripts and rare books (0)

100 Philosophy and Psychology

  • 100 Philosophy (4)
  • 110 Metaphysics (2)
  • 120 Epistemology, causation, humankind (3)
  • 130 Paranormal Phenomena (6)
  • 140 Specific philosophical schools (2)
  • 150 Psychology (10)
  • 160 Logic (4)
  • 170 Ethics (2)
  • 180 Ancient, mediaeval, Oriental philosophy (0)
  • 190 Modern Western philosophy (1)

200 Religion

  • 200 Religion (6)
  • 210 Natural theology (2)
  • 220 Bible (1)
  • 230 Christian theology (1)
  • 240 Christian moral and devotional theology (0)
  • 250 Christian orders & local church (1)
  • 260 Christian social theology (1)
  • 270 Christian Church History (1)
  • 280 Christian denominations & sects (1)
  • 290 Other & comparative religion (3)

300 Social Science

  • 300 Social Science (98)
  • 310 General statistics (6)
  • 320 Political science (29)
  • 330 Economics (97)
  • 340 Law (17)
  • 350 Public administration (28)
  • 360 Social services, associations (55)
  • 370 Education (123)
  • 380 Commerce, communications, transport (119)
  • 390 Customs, etiquette, folklore (15)

400 Language

  • 400 Language (11)
  • 410 Linguistics (14)
  • 420 English (7)
  • 430 Germanic languages [German] (1)
  • 440 Romance languages [French] (2)
  • 450 Italian, Romanian, Rhaeto-Romanic (1)
  • 460 Spanish & Portuguese languages (6)
  • 470 Italic languages [Latin] (1)
  • 480 Hellenic languages [Classical Greek] (1)
  • 490 Other languages (11)

500 Science

  • 500 Science (50)
  • 510 Mathematics (18)
  • 520 Astronomy & allied sciences (14)
  • 530 Physics (26)
  • 540 Chemistry (15)
  • 550 Earth sciences (28)
  • 560 Paleontology, Paleozoology (10)
  • 570 Life sciences (29)
  • 580 Botanical sciences (11)
  • 590 Zoological Sciences (50)

600 Technology

  • 600 Technology (10)
  • 610 Medical sciences, Medicine (131)
  • 620 Engineering & allied operations (81)
  • 630 Agriculture (27)
  • 640 Home economics & family living (80)
  • 650 Management & auxiliary services (54)
  • 660 Chemical engineering (7)
  • 670 Manufacturing (7)
  • 680 Manufacturing for specific uses (86)
  • 690 Buildings (5)

700 Arts and Entertainment

  • 700 Arts and Entertainment (38)
  • 710 Civic & landscape art (2)
  • 720 Architecture (11)
  • 730 Plastic arts, Sculpture (1)
  • 740 Drawing and decorative arts (38)
  • 750 Painting & paintings (3)
  • 760 Graphic arts, Printmaking & prints (6)
  • 770 Photography & photographs (5)
  • 780 Music (43)
  • 790 Recreational & performing arts (428)

800 Literature

  • 800 Literature (42)
  • 810 American literature in English (9)
  • 820 English & Old English literatures (5)
  • 830 Literatures of Germanic languages (0)
  • 840 Literatures of Romance languages (0)
  • 850 Italian, Romanian, Rhaeto-Romanic literatures (1)
  • 860 Spanish & Portuguese literatures (0)
  • 870 Latin & Old Latin literatures (1)
  • 880 Hellenic literatures, Classical Greek (1)
  • 890 Literatures of other languages (1)

900 Geography & History

  • 900 Geography & History (13)
  • 910 Geography & travel (74)
  • 920 Biography, genealogy, insignia (12)
  • 930 History of ancient world (6)
  • 940 General history of Europe (34)
  • 950 General history of Asia [Orient, Far East] (18)
  • 960 General history of Africa (7)
  • 970 General history of North America (14)
  • 980 General history of South America (4)
  • 990 General history of other areas (12)

 

About CyberDewey

The Dewey Decimal Classification comprises 10 Classes (Generalities, Philosophy, Religion, Social Science, Language, Natural Science, Technology, Art, Literature, and History). Each Class is further subdivided into ten Divisions, and each Division into ten Sections.

This page displays the one hundred Divisions, each of which is displayed on a subpage. The numbers in parentheses show the number of links in each Division.

Related Materials:

  • My article Organizing Computer Resources tells the tortuous path I followed before discovering that Dewey is a robust, general-purpose way to organize just about anything.
  • The Subject Index contains an alphabetical list of the terms used in the division headings.
  • The FAQ contains pointers to other Dewey resources, some suggestions for using Dewey, and a status report.

 


Internet Cataloguing-in-Publication Data

Mundie, David A.
    CyberDewey: A Catalogue for the World Wide Web / David A. Mundie
    Pittsburgh, PA : Polymath Systems, 1995
    1. Bibliography
    011.3 dc-20    [MARC]
