Archive for category Computing

Back to the Future – But, Seriously?!?

This article is exactly why I think Corporate IT (ICT, whatever it is named) has completely lost its direction over the last decade or so:
"How to transform enterprise architecture into business architecture" (from SearchCIO in TechTarget).

Really!  We knew about business-driven IT in the '80s and '90s – why on earth are we having to re-invent it again in the 2010s?

Not that I disagree with much that is said in the article, but it is just that I sometimes despair over why the IT industry continually re-discovers and re-invents what has been around for years.  Surely people can study a little history occasionally?

And surely it should be the other way around – business architecture (if there is such a thing!) determines enterprise architecture.


Microsoft Word Shortcut Keys

 
In the Customize Keyboard dialog, find FileProperties under All Commands and assign a shortcut of Shift-Ctrl-Alt-P

 

Alt+F                     Opens the Office Button
Alt+E                     Opens the Prepare options
Alt+P                     Opens the Properties
Alt+F, T                  Opens Word Options

W2010: Alt, F, I, Q, P    Show All Properties
W2010: Alt, F, I, Q, S    Properties

Keyboard shortcuts for Microsoft Word (from Microsoft): https://support.microsoft.com/en-us/kb/290938
KeyRocket: https://www.veodin.com/keyrocket/word-2010-shortcuts/ https://www.veodin.com/keyrocket/word-2007-shortcuts/
MVPS Word: http://word.mvps.org/faqs/general/shortcuts.htm
Shortcut World: http://www.shortcutworld.com/en/win/Word_2013.html
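
For those who like to script such things, the Customize Keyboard step above can, in principle, also be automated. Here is a minimal, untested sketch using Python and pywin32 (win32com); the constant values are my assumptions taken from the Word object model's WdKeyCategory and WdKey enumerations, so verify them against your version of Word before relying on it.

# Sketch only: bind Shift+Ctrl+Alt+P to the built-in FileProperties command.
# Constant values are assumed from the Word object model enumerations
# (WdKeyCategory / WdKey) -- check them in the VBA Object Browser first.
import win32com.client

wdKeyCategoryCommand = 1                              # assumed WdKeyCategory value for built-in commands
wdKeyControl, wdKeyShift, wdKeyAlt = 512, 256, 1024   # assumed WdKey modifier bits
wdKeyP = 80                                           # letter keys use their upper-case ASCII codes

word = win32com.client.Dispatch("Word.Application")
word.CustomizationContext = word.NormalTemplate       # key bindings are stored in a template
word.KeyBindings.Add(
    wdKeyCategoryCommand,
    "FileProperties",
    word.BuildKeyCode(wdKeyControl, wdKeyShift, wdKeyAlt, wdKeyP),
)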

 


Superfish BlooperFish ZedFish DeadFish

I got myself the Superfish malware. Really adware rather than pure malware, but really annoying (in a sort of minimalistic way).

For those that have not experienced it, Superfish is an advertising popup that mysteriously appears over the top of various words (keywords) in the web page that you are viewing. It shows some products that you may be interested in buying, based on the keyword (I presume). It is pretty easy to get rid of once it has popped up – just click on the cross at the top right and it is gone. It does not seem to do anything else malevolent. So, in the big scheme of things, not that bad. The only issue is that it is really ANNOYING – having your web page viewing disturbed by all this extraneous pictorial material appearing at random places on the screen. Just plain annoying – a bit like a person who constantly interrupts you when you are talking. Eventually you get so sick of it that you just want them to go away. Same with Superfish.

I did some googling around to see how to get rid of it. Most of the advice did not really work. You know, the basic advice – download this adware remover, use Malwarebytes Antimalware (which, by the way, is totally awesome – get yourself a copy and keep using it).

The only other advice that appeared to work was to look at all the extensions that you may have loaded in Google Chrome (sorry, that is the current browser I am using and where the problem occurred and was fixed – but the same process should work in a similar manner for other browsers), look at the Options for each of the enabled extensions, and see if they have something like "Enable similar product search powered by Superfish". Note that the words may be different and that "Superfish" may not actually appear – but anything that mentions similar product searches is what you are after.

Untick the option and save.

You should then probably disable and then re-enable the extension.

The extension that I found the problem in was “OSearch” from the Diigo people.

Other people found the problem in other extensions, such as MeasureIt.

The annoying thing is that there does not seem to be an automated method to get rid of Superfish, and that it appears in multiple different extensions – so you don’t really know where it is apart from laboriously going through extensions. Now, that is ANNOYING too (everything about Superfish annoys me now!!).

The final annoyance is that extension writers and software developers are putting nuisance adware in their products and not telling people that they have done so, either up front or at installation time. I know the stuff is free, but a bit of disclosure would assist in deciding whether to use it or not. Maybe I am just complaining about an inevitable consequence of the free software world and should just accept it, except for the fact that it could be done better (and there is no reason not to change for the better).

Does anyone know of a fully automated method of getting rid of Superfish?
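
In the meantime, here is a rough Python sketch that at least automates the tedious part – grepping every installed extension for the telltale "superfish" string. The Chrome profile path is an assumption (the Windows default); adjust it for your own platform and profile, and treat any hit as a prompt to check that extension's options rather than as proof.

# Rough sketch: scan every installed Chrome extension for "superfish" mentions.
# Assumes the default Chrome profile location on Windows.
import os

PROFILE = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions"
)

for ext_id in os.listdir(PROFILE):
    ext_dir = os.path.join(PROFILE, ext_id)
    for root, _dirs, files in os.walk(ext_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as fh:
                    if b"superfish" in fh.read().lower():
                        print("Possible Superfish reference:", path)
            except OSError:
                pass  # skip unreadable files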


This is the Future for Healthcare

 

http://www.forbes.com/sites/bruceupbin/2013/02/08/ibms-watson-gets-its-first-piece-of-business-in-healthcare/

IBM's Watson, the Jeopardy!-playing supercomputer that scored one for Team Robot Overlord two years ago, just put out its shingle as a doctor or, more specifically, as a combination lung cancer specialist and expert in the arcane branch of health insurance known as utilization management.  Thanks to a business partnership among IBM, Memorial Sloan-Kettering and WellPoint, health care providers will now be able to tap Watson's expertise in deciding how to treat patients.

Pricing was not disclosed, but hospitals and health care networks who sign up will be able to buy or rent Watson's advice from the cloud or their own server. Over the past two years, IBM's researchers have shrunk Watson from the size of a master bedroom to a pizza-box-sized server that can fit in any data center. And they improved its processing speed by 240%. Now what once was a fun computer-science experiment in natural language processing is becoming a real business for IBM and WellPoint, which is the exclusive reseller of the technology for now. Initial customers include WestMed Practice Partners and the Maine Center for Cancer Medicine & Blood Disorders.

Even before the Jeopardy! success, IBM began to hatch bigger plans for Watson and there are few areas more in need of supercharged decision-support than health care. Doctors and nurses are drowning in information with new research, genetic data, treatments and procedures popping up daily. They often don’t know what to do, and are guessing as well as they can. WellPoint’s chief medical officer Samuel Nussbaum said at the press event today that health care pros make accurate treatment decisions in lung cancer cases only 50% of the time (a shocker to me). Watson has shown the capability (on the utilization management side) of being accurate in its decisions 90% of the time, but is not near that level yet with cancer diagnoses. Patients, of course, need 100% accuracy, but making the leap from being right half the time to being right 9 out of ten times will be a huge boon for patient care. The best part is the potential for distributing the intelligence anywhere via the cloud, right at the point of care. This could be the most powerful tool we’ve seen to date for improving care and lowering everyone’s costs via standardization and reduced error. Chris Coburn, the Cleveland Clinic’s executive director for innovations, said at the event that he fully expects Watson to be widely deployed wherever the Clinic does business by 2020.

Watson has made huge strides in its medical prowess in two short years. In May 2011 IBM had already trained Watson to have the knowledge of a second-year medical student. In March 2012 IBM struck a deal with Memorial Sloan Kettering to ingest and analyze tens of thousands of the renowned cancer center’s patient records and histories, as well as all the publicly available clinical research it can get its hard drives on. Today Watson has analyzed 605,000 pieces of medical evidence, 2 million pages of text, 25,000 training cases and had the assist of 14,700 clinician hours fine-tuning its decision accuracy. Six “instances” of Watson have already been installed in the last 12 months.

Watson doesn’t tell a doctor what to do, it provides several options with degrees of confidence for each, along with the supporting evidence it used to arrive at the optimal treatment. Doctors can enter on an iPad a new bit of information in plain text, such as “my patient has blood in her phlegm,” and Watson within half a minute will come back with an entirely different drug regimen that suits the individual. IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.

WellPoint will be using the system internally for its nurses and clinicians who handle utilization management, the process by which health insurers determine which treatments are fair, appropriate and efficient and, in turn, what it will cover. The company will also make the intelligence available as a Web portal to other providers as its Interactive Care Reviewer. It is targeting 1,600 providers by the end of 2013 and will split the revenue with IBM. Terms were undisclosed.


 


Passwords

Constant bane of our computing life, but here are some resources:

Update (25 June 2017): Comparitech added.

A little commentary …

The security of a password against brute-force attack depends very much on the number of characters used in the password, since the number of combinations to be tried grows exponentially as more characters are added (each extra character multiplies the search space by the size of the character set).

The common rules applied to make a password 8 characters (or at least more than 6) with at least one upper case, one digit and one special character are well and good, but really are not super-effective against a concerted hack, especially if one uses a “standard” type of password. What do I mean by a “standard” type of password? Well, in many instances, people use a word, with a capital as the first letter, then a special character (sometimes optional) then 1 or 2 digits at the end. The whole string is only 8 to 10 characters in length maximum. Knowing this type of behaviour, a hacker will apply some heuristics in their crack-search, to vastly reduce the amount of time to crack the password.

Thus, the first site above suggests that a 6-character password with 1 capital letter could be cracked in 2 seconds, and a 2-digit number in 10 nanoseconds. If you had an 8-character password made up of 6 letters beginning with a capital and then 2 digits at the end, the site suggests it would take 6 hours to crack by blind brute force, whereas an attacker who brute-forced the letter portion and the digit portion independently, and then tried combinations of the two, would theoretically need only about 20 seconds (hopefully my maths is correct).

Adding the extra complexity of a special character just before the 2 digits might, say, add another couple of nanoseconds for the special character on its own, and bring the whole lot up to perhaps 40 seconds to crack.

The same would apply if one turned the digits and special characters around (digits first, then special character then letters).

Furthermore, using a word search through a known dictionary even further reduces the number of permutations to search and the time to crack (very substantially).
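
To make the point concrete, here is a little back-of-the-envelope Python sketch. The guess rate is purely an assumption for illustration (ten billion guesses per second, vaguely in the territory of a decent cracking rig), so the absolute times will differ from the site's figures above – it is the ratio between the blind search and the structure-aware search that matters.

# Back-of-the-envelope comparison: blind brute force vs a structure-aware search.
GUESSES_PER_SECOND = 10_000_000_000          # assumed attacker speed (illustrative)

def seconds_to_crack(keyspace):
    return keyspace / GUESSES_PER_SECOND

# Blind brute force over 8 characters drawn from ~95 printable ASCII characters.
blind = 95 ** 8

# "Standard" pattern: a capital, five lower-case letters, then two digits.
# An attacker who knows the pattern only tries 26^6 letter bodies x 100 digit suffixes.
structured = (26 ** 6) * (10 ** 2)

print(f"Blind 8-character search: {blind:.2e} candidates, "
      f"~{seconds_to_crack(blind) / 3600:.0f} hours")
print(f"Pattern-aware search:     {structured:.2e} candidates, "
      f"~{seconds_to_crack(structured):.1f} seconds")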

The following snippet from a Wikipedia article on the subject is very informative:

Human-generated passwords

People are notoriously poor at achieving sufficient entropy to produce satisfactory passwords. According to one study involving half a million users, the average password entropy was estimated at 40.54 bits.[8] Some stage magicians exploit this inability for amusement, in a minor way, by divining supposed random choices (of numbers, say) made by audience members.

Thus, in one analysis of over 3 million eight-character passwords, the letter “e” was used over 1.5 million times, while the letter “f” was used only 250,000 times. A uniform distribution would have had each character being used about 900,000 times. The most common number used is “1”, whereas the most common letters are a, e, o, and r.[9]

Users rarely make full use of larger character sets in forming passwords. For example, hacking results obtained from a MySpace phishing scheme in 2006 revealed 34,000 passwords, of which only 8.3% used mixed case, numbers, and symbols.[10]

The full strength associated with using the entire ASCII character set (numerals, mixed case letters and special characters) is only achieved if each possible password is equally likely. This seems to suggest that all passwords must contain characters from each of several character classes, perhaps upper and lower case letters, numbers, and non-alphanumeric characters. In fact, such a requirement is a pattern in password choice and can be expected to reduce an attacker's "work factor" (in Claude Shannon's terms). This is a reduction in password "strength". A better requirement would be to require a password NOT to contain any word in an online dictionary, or list of names, or any license plate pattern from any state (in the US) or country (as in the EU). In fact if patterned choices are required, humans are likely to use them in predictable ways, such as capitalizing a letter, adding one or two numbers, and a special character. If the numbers and special character are added in predictable ways, say at the beginning and end of the password,[11] they could even lower password strength compared to an all-letter, randomly selected, password of the same length.

The take-away: read all the Wikipedia articles on passwords and cracking etc, and follow the latest advice. At the moment, it appears to be to use a longer phrase (the more characters the better) and insert special characters and digits in random places in the phrase, if possible (not just at the front and the end).
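
If you want to put that advice into practice without trusting your own sense of "random", a small Python sketch along these lines would do. The word list below is just the well-worn xkcd example and is purely illustrative – in reality the words themselves should also be drawn at random from a large dictionary.

# Sketch: splice random digits/special characters into a multi-word phrase,
# using the secrets module so the randomness is cryptographically sound.
import secrets

words = ["correct", "horse", "battery", "staple"]   # illustrative phrase only
extras = "0123456789!@#$%^&*"

phrase = list("".join(words))
for _ in range(4):                                  # insert four extra characters at random positions
    pos = secrets.randbelow(len(phrase) + 1)
    phrase.insert(pos, secrets.choice(extras))
print("".join(phrase))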

 


Conversation as Ephemera

Now, I find this concept rather interesting: http://bokardo.com/archives/calling-snapchat-the-sexting-app-misses-a-huge-shift-in-mobile-photos-and-communication/

The basic premise is that, in the past, engaging in a conversation was totally ephemeral – it was NOT recorded for posterity, unless someone specifically wanted it to be recorded.  Hence the laws regulating secret recordings – people have to give explicit permission to be recorded.  Many computer products want to record everything for all eternity, thereby potentially leading to embarrassing or legally difficult situations in the future (those awful pictures of one when drunk, or the inappropriate status update or comment, available for everyone to see forever on Facebook).  But this is not necessarily what the participants intended, nor desired, and maybe we should be moving back to treating events (conversations, statements, comments, photos, etc) as ephemeral, only explicitly recording some as worthy of posterity.

 


10 Technology Trends for 2013

Around this time of the year (ie New Year's), people do like predicting what is going to happen in the forthcoming year.  Apart from the obvious "game" (chortling at the predictions from years gone by which have come nowhere near true), there is still value in both preparing and reviewing predictions for the future.  They do assist in focussing one's thoughts, thinking through issues of importance, identifying directions and informing action.

So, in this spirit, Lunarpages (who are great hosting providers, in the same league as Dreamhost, also great hosting providers), offer their 10 Technology Trends for 2013.  Enjoy!

I could not just leave it with the Lunarpages futures.  Here are a couple of extras …

  • A “liquid” screen which raises real buttons/keys when required (and they disappear when not required for typing), from Tactus Technology (see TechCrunch article);

 



What’s in store in ’13

A few resources from a variety of sources about what? Maybe the future, maybe not!


A Guide to Implementing Cloud Services

On the 19th September 2012, the Australian Government Information Management Office (AGIMO), as part of the Australian Government Department of Finance and Deregulation, released a paper entitled "A Guide to Implementing Cloud Services", in PDF and DOC forms.

Overall, a pretty reasonable document, providing some practical guidance on implementing cloud-based services within an organisation.  Rather obviously, it is written mostly from the perspective of, and to be used by, government agencies, but anyone could effectively use this document (with appropriate modification as one sees fit).  In particular, it provides a reasonable (but not overly exhaustive) set of checklists addressing various aspects of effective implementation and use of cloud services.

 


Zuhandenheit and the Cloud UI

Derek Singleton from Software Advice recently wrote a blog article postulating that maybe it was now time for a set of standards associated with the user interface for cloud applications to be formulated. I think he was mainly referring to business applications in the cloud – you know, ERP systems, CRM systems, etc.

Reasonable enough concept, and easy to understand within the frame of retrospective analysis of technological (and, in many cases, social) advances. Standardisation typically allows one to conveniently abstract out (sometimes complex) details of particular technologies, such that one can concentrate on the task at hand, rather than having to focus on solving a foundational issue for every case. Standards allow for rapid incremental progress.

The ICT world is replete with examples of standardisation assisting in such a manner. Indeed, one could argue that most of ICT only works because of such an approach. I am writing this on a tablet computer, sitting outside in the early morning, experiencing one of the most beautiful environments in the world. The tall straight trees crowned in a verdant canopy juxtaposed against the bluest of cloudless blue skies stretching forever would bring joy to any soul in any situation. (Why, then, you should ask, would I be writing about Cloud UI standards when presented with such a sight? Well might you ask. Because there is no answer to such an antithetical query. Let us continue).

I can concentrate on thinking and writing what you are now reading because I do not need to think about how the buttons on the keyboard connected to the screen are translated into electrical signals, which are then interpreted by an operating system (in simple terms – I don’t want to explain everything about every little element of computers) into characters for input into a program, which will then issue instructions to display said characters on a screen, not worrying about how shapes are pixelated to create a readable representation for me to understand and continue writing from.

Not only do I not need to worry about such issues, but the creators of the program I am currently using (Evernote – fast becoming a “must-have” on every portable computing device in the world) also did not need to worry about such issues. The abstraction of the operating system from base hardware, according to various standards (de facto and de jure – in a sense) meant that they could concentrate on implementing a wonderful information management tool. Further than that, a standard approach to certain elements of use of the operating system meant that they could implement their program relatively straight-forwardly (I am not saying it might have been easy) on a number of operating systems.

Such capabilities would have been unthinkable fifty years ago (in ICT terms). I am also able to take advantage of a WiFi connection to sit outside and enjoy the world around me (looking up when the kookaburra call intrudes, to once more savour the sight), in order to record the information I have typed, through the internet, on a server somewhere in the world (as well as on my tablet). Do I need to know about channels and frequencies, IP addresses and DHCP, routers and relays? I think not. Thank goodness I don't. If I had to deal with any of these issues every time I had to write something, nothing would ever get written. Ditto for the Evernote programmers. Ditto for the Android programmers. Ditto for providers of the cloud service running the data storage service. Ditto for almost everyone marginally involved in the vast network of relationships of functional delivery of this technological world, apart from a small number of people who must actually create or fix such networking software and hardware.

This is standardisation at work. And it works. Much more than it doesn't work. And it creates and creates and creates. It applies not only to ICT, but to engineering generally, science generally, any technological endeavour generally, human centred and planned systems generally, and even throughout society. If one wanted to wax philosophical, there could be some deep reasons behind all this.

Which indeed there appear to be. Through the work of Martin Heidegger (1889-1976), one of the most pre-eminent philosophers of the 20th century. Do yourself a favour (to channel a minor celebrity). Read up on the works of Martin Heidegger, even if you don't read his actual works (he is a little hard to get into – not least due to the very specific terminology that he uses, relying as it does on the original German). Heidegger attacked questions of the essentialness of Being, what it is to exist. Within the world in which we live.

His work, thus, addresses issues of how we, as humans, operate within the world, how we interact with things and other entities. This has applicability not only to general human behaviour, but also to computing – how we use and interact with computer systems (as artefacts in the world). This thesis was developed and written about by Fernando Flores and Terry Winograd in their seminal book "Understanding Computers and Cognition". They described the Heideggerian concept of Zuhandenheit ("Ready-at-Hand") in computing terms. In essence, ready-at-hand (or readiness-at-hand) is the concept of how some artefact / entity becomes part of our world in such a manner that one does not need to think at all in order to use the artefact. Its place in the world (which includes how it is used) is at all times "ready" for us to use (or relate to). There is no necessity for one to remove oneself from the world situation one is currently in, in order to deal with the artefact. It is only when the entity changes from being Zuhandenheit to Vorhandenheit (meaning "Present-at-Hand") that it comes into the foreground of one's attention, and must be dealt with consciously in some manner.

The famous example used is that of a hammer. In normal circumstances, for most people brought up in a civilised Western tradition (the context is rather important for the concept of Zuhandenheit), a hammer is "ready-at-hand". One simply picks it up and hammers, without needing to consciously think about how to use the hammer. It is only when the hammer is broken in some manner that it becomes "present-at-hand", when the item is now "present" in front of one, requiring one's attention – in order to fix it, or work out how to use it in its broken state.

Present-at-hand means that one is not concentrating on the task required or desired, but rather, must focus attention and consciousness onto the entity which is present-at-hand, determining what to do with it and how to use it, possibly fixing it, before being able to then re-focus on the task-at-hand.

Present-at-hand is a necessity in many situations, particularly novel circumstances, but is positively detrimental if it becomes overwhelming. Items present-at-hand must fade into ready-at-hand for one to be able to successfully navigate the complex and chaotic world one lives and works in.

This concept of Zuhandenheit applies directly to computing. One could argue that the most effective computing is one that is fully ready-at-hand. No need to think, simply access the computing facility and it is done for one. It is the dream of the artificial intelligence community (for many of them). It is represented in science fiction – such as the computer (with the voice of Majel Barrett) in Star Trek. Always there, simply need to talk to it, perfectly divines one’s needs and intentions, and then executes without error. On the bridge of the Enterprise, there is never a need to get the manual out to work out which menu item hidden five deep in a dialogue box, which arcane key combination, which parameter in which format for which API call one needs to know, understand and apply to get something done. If that was the case, the Klingons would have long ruled the Empire (to mix filmic metaphors a touch).

Of course, the current state of the art is light-years removed from the science fiction of Star Trek and other futuristic visions. But the basics of the concept apply to everyday use of computers. It would be a tedious working life if every time one had to type a memo, one had to look up help or read a manual or ask for assistance to perform the simplest of activities – to underline a phrase, or to indent a paragraph, or edit a mis-typed word. Work would barely be finished if every document was an arduous “hunt and peck” on the keyboard.

For most of today's office workers, the QWERTY keyboard is ready-at-hand. One does not need to think to use the keyboard (even if one is looking at the keys to ensure that the fingers are in the correct place). One can concentrate on what is to be said rather than where on earth is that "!" key. The readiness-to-hand only needs to break marginally for the frustration and problems of the present-at-hand to become apparent. If you were brought up in North America, England, Australia, etc, have you ever tried typing on a German keyboard, or a Spanish keyboard, etc? Have you gone to hit the @ key for an email address and found it is a completely different character? Now where is that @ key? I simply want to type in my email address, which typically takes all of one second, yet here I am desperately scanning every key on this keyboard to find a hidden symbol. Finally, after trying two or three other keys, there it is. Thank goodness. Only to next be snookered because the Z key is now somewhere else as well.

But being ready-to-hand does not necessarily mean the best or most efficient (or effective). Over the years, many people have contended that a Dvorak keyboard is a much better keyboard layout to use, from a speed, accuracy of typing, and ergonomic perspective. But, it has never caught on. Why? Mostly because of the weight of “readiness”, the heaviness of “being”, which is the qwerty layout.

People develop what is loosely called "muscle memory". Their fingers know where to go, just through the muscles alone – they do not need to think about where their fingers need to be in order to type a word – the muscles in the fingers automatically move to the correct spots. No thinking required. All the mental processing capacity in one's brain can be applied to what is being written rather than the typing itself (note the ready-to-hand aspect). It is just too much effort to re-train one's muscles, with little perceived benefit. And if all the consumer population "profess" to wanting/using a qwerty keyboard, then all the vendors will supply a qwerty keyboard. And if the only keyboards commonly available are qwerty keyboards, then people will only know how to use a qwerty keyboard, their muscles will be trained, and they will only want to use a qwerty keyboard. And thus the cycle repeats on itself. Self-reinforcing. Qwerty keyboard it is, from now until eternity.

Unless, there is a disruption or discontinuity. So, if there is no keyboard, how will one type? Or if there are only a limited number of keys, how does one enter text from the full alphabet? Enter the world of mobile phones, and then smartphones. With only a limited number of numeric keys, text can be entered using innovative software (ie predictive text). When first attempted, this situation is blatantly “present-at-hand”. It takes some time to get used to the new way of entering text/typing. But for most people, some continued practice whilst fully present-at-hand soon leads to such typing being ready-at-hand (although maybe not as effective or efficient as a full sized qwerty keyboard).

But, be aware, not all people make the transition from the present-at-hand to the ready-at-hand. Some people just can not use predictive text entry – they revert to previous methods of phone use, or no texting at all.

What happens if there are not even numeric keys? The touch-sensitive smartphone (or tablet) can emulate a qwerty keyboard (or a slightly modified one), but the reduced efficacy of the onscreen qwerty keyboard makes it rather less ready-at-hand. Each time a character is not typed correctly, or multiple keystrokes have to be executed to enter a single character (such as when entering some of the special characters using the standard iPhone onscreen keyboard), or a word is auto-corrected incorrectly, the artefact is "broken" – a little like the broken hammer – and has to be used specially, now "present-at-hand", in order to achieve the end result. This makes one susceptible to learning an alternative process of entering text (a new present-at-hand leading, possibly, to being ready-at-hand) which may be more efficient. The barriers to learning anew are reduced, allowing the learning to be attempted, and the new thereby mastered.

Such thinking tends to be somewhat embedded in design culture. Take, for example, this excerpt from an article in Infoworld on the use of iPads in SAP:

“Bussmann is a big fan of design thinking, as he says the mobile experience is different even for things that can be done on a PC. He notes that people seem to grok and explore data more on an iPad than a PC, even when it’s the same Web service underlying it. He notes that PCs tend to be used for deep exploration, while a tablet is used for snapshot trend analysis. He compares email usage on a PC to email usage on a BlackBerry, the latter being quicker and more of the moment. If mobile apps were designed explicitly for the different mentality used on a tablet, Bussmann believes that the benefits of using iPads and other tablets would be even stronger.”

The point is that different artefacts have different contexts, leading to different modes of operation (new elements are ‘ready-to-hand’ which allow new modes of thinking and operation).

Good design (in a computing context) is all about readiness-to-hand. How to design and implement a facility which is useful without requiring excess effort. How to design something that works without undue learning and constant application of thought.

What does this mean in relation to UI standards?

Such standards are an attempt to generate a readiness-to-hand. If every keystroke means the same thing, across all applications, then “muscle memory” can kick in, and one need not bother learning certain aspects of the application, but rather, can commence using it, or use various (new) features, without undue effort.

A simple example. In many applications under MS Windows, the F2 key "edits" the currently selected item, say a filename within a folder, allowing one to change the name of a file. When working with many documents and files on a continuous basis, it has become almost instinctive for the second littlest finger on the left hand to reach for and hit the F2 key, then type in the new name for the file. When hitting the F2 key does not allow the filename to be changed, but rather results in some other action, frustration soon sets in (and by soon, one means explosively and immediately).

The next question is – which key renames a filename? None of the current F keys. Access "Help". Find that it is Shift-F6. Now, who would have thought that? Why pick Shift-F6? Why not Ctrl-Alt-Shift-F12? Or any other key combination. I am sure that the author of the software had a good reason to use Shift-F6 for file rename (indeed, if one carefully reviews all the key combinations for functions in the program, one can see some logic relating to the assignments of keys; further, in some deep recess of memory, I recollect that the key assignments may relate to use with another operating system, in another context, according to another (so-called) "standard"). None of which is much help when one is suddenly thrust from readiness-at-hand into present-at-hand and spends valuable time trying to achieve a very simple operation.

Fortunately, the author of the program wrote his code such that every function could be mapped to a different key (and vice versa, obviously). This means that I can re-map the F2 key to perform a file rename. A bit of a pain when it has to be done on every computer that the program is used on (the trauma and destructiveness of the PC-centric world – which, one day, the "cloud"-world will fix), but the benefit of having File-Rename ready-to-hand outweighs the cost of having to perform a key re-mapping once in a while (for me at least).

Unfortunately, other people may not be quite as “techno-savvy” as I am, not realising that it is possible to re-map the function keys. Or, programs are not coded in such a flexible manner, such that function keys can not be re-mapped within the program. Thus, people are “forced” to endure a facility that is plainly not ready-at-hand – and waste hours of valuable time, building levels of frustration that can never be relieved. It is no wonder that many people find dealing with computers difficult and “non-intuitive” (a nebulous ill-defined term that I normally eschew, in favour of the more technical and proper philosophical and psychological terms which may apply – but used here as a reference to the nebulous ill-at-ease feeling that people dealing with such non-ready-at-hand computing describe as being “non-intuitive”), and that people, once they have learnt one program or system etc are loath to learn another, and are slow to embrace additional capability or functionality within the program or system that they (purportedly) know.

Or, in another twist on not being ready-at-hand, the program (or operating system, or other facility) may simply not implement the desired capability. Thus, on the tablet I am currently using to type this exploration, there are no keystrokes to move back (or forward) a single word, even though the tablet has a keyboard attached (which is not too bad, but does miss keystrokes on a regular basis – thus making it also non-ready-at-hand in an irritatingly pseudo-random manner), and that keyboard has arrow keys and a control key on it. Using the Android operating system and Evernote (and other programs), there is simply no quick, keyboard-oriented way to go back many characters to fix a typo (refer to the previous problem with the keyboard missing characters, necessitating quick repositioning of the cursor to fix the problem). Thank goodness that the operating system / program implemented the "End" key, to quickly go to the end of the line. Under MS Windows, Linux and other operating systems, and the major word editing programs on those systems, the Ctrl-Left Arrow combination to go back one word is almost always implemented. When it isn't, it is sorely missed.

No doubt, the developers of the Android tablet based systems are working with the paradigm that most of the navigation will be performed using the touchscreen capabilities – and therefore that will suffice for moving back a word or so. Little do they realise that forcing someone to take their fingers off the keyboard and try to place them on a screen, in a precise location is violently yanking them from being fully ready-at-hand (typing, typing, typing as quickly as they can think) into a difficult present-at-hand situation (have you ever tried to very precisely position a cursor in smallish text on a partially moving screen with pudgy fingertips?).

Thus, one of the issues for readiness-to-hand and computing relates to the context in which an element is meant to be ready-at-hand. Mix contexts, take an element out of one context and place into another, and suddenly what works well in one place does not work so well anymore.

Which leads back to the question of standards for UIs. Such standards will need to be thought through very carefully – lest they lead to further "presentness", and not the desired effect of "readiness". This is predicated, obviously, on the proposition that the main reason for promulgating and adopting standards is to effect a greater ready-at-hand utility, across the elements of one's computing usage. The standards are meant to enhance the situation whereby knowledge gained in the use of one system, program or facility can be readily transferred into the use of another system, program or facility.

So, promulgating a standard that "theoretically" works well given one context may actually lead to deterioration in readiness-at-hand within another context. This may be at the micro level – for instance, a standard for certain key-mappings which works well within a browser-based environment on a PC may be completely ineffective on a smartphone or tablet. And even detrimental if the alternatives to using keys are not well thought through on the other platforms.

The "breakage" may be at a more macro level, whereby standards relating to screen layout and menus and presentation elements may be completely inappropriate and positively detrimental when the program requires a vastly different UI approach in order to maximise its facility (efficacy and efficiency). Thus, such screen presentation standards for an ERP suite may be wholly inappropriate for the business intelligence aspects of that ERP suite.

Or, to put it another way, one may implement a BI toolset within the ERP suite that conforms to the prevailing UI standard. Its readiness-at-hand will be sub-optimal. But the hegemony of the "standard" will prevent efforts at improvements to the BI UI that do not meet that standard, thus stagnating innovation, but more importantly, consigning those using the system to additional and wasted effort, as well as meaning that such a system will not be used, or not used to the extent that it could or should. I am sure that many readers have numerous examples of implementing systems which, on paper, tick all the boxes and seem to address all criteria, yet are simply not used by those in the organisation that such systems are intended for (or used grudgingly and badly). One of my favourite tropes in this area is corporate document management systems. Intended for the best possible reasons, they have the weight of corporate "goodness" behind them (just like white bread), yet never seem to deliver on their promises, are continually subverted, and blatantly fail the readiness-at-hand test, both for material going in and for retrieving material later. Unfortunately (or fortunately for this missive), that is a story for another day.

When faced with the intransigence of a "standard", either de facto or de jure, it typically takes a paradigm shift in some other aspect of the environment to allow a similar shift with respect to the "standard". Thus, in recent times, it was the shift in terms of keyboard-less touchscreen smartphones and tablets which required a rethink of the predominant operating system UI. For a variety of reasons, that UI (or, let us say, UI concept) quickly came to prominence and predominance (debatable but defensible if one considers that Apple, as the main purveyor of such UIs, has the greatest capitalisation of any company in the world at the moment. Not directly relating to the UI (ie there is no direct causal connection), but with the UI no doubt making a contribution to the success of the offerings).

It took that paradigm shift across a range of factors in the "world-scene" of computing to make such large changes to the UI standards. The incumbent determinant of the standard (in this instance, let us say, Microsoft Windows) would not make such large-scale changes. Their inertia, and the inertia of those using the system, would not allow large-scale changes to be made. Not until they were faced with the overwhelming success of an alternative was such an alternative belatedly added to the UI for Windows (well, in what could graciously be called an embodiment of the new alternative UI, if that is what one so desires).

This is not to say that such inertia, or the definition of a "standard" (de jure or de facto), is necessarily a bad thing. Making too many changes too quickly, in any environment, leads to much being "broken" into present-at-hand, requiring too much effort simply to achieve day-to-day operation. Destruction of readiness-at-hand becomes so unsettling that little is effectively achieved. Herein lies the rub with "change" within an organisation. Change is a necessity (change is simply another name for living and thriving. If one's body never changed, one would be dead. Think of the biology – the deep biology), but any change means at least something becomes present-at-hand, at least for a moment, until it is processed and absorbed into being ready-at-hand. The secret to success at change management is minimising this disruption from ready-at-hand to present-at-hand to ready-at-hand again.

“Inertia” is the means whereby learned behaviours are most efficiently exploited to the greatest effect – PROVIDED that the environment or context has not changed. Such a change necessitates a re-appraisal, since, as per the definition above, something ready-at-hand in one context or environment will not be so in another (almost by definition).

Thus, a standard for the UI of Cloud-based applications, principally ERP applications (and those applications orbiting within the ERP "solar system"), is a "GOOD THING" – provided that the contextualisation of those standards is well thought through (well defined, well documented and genuinely applicable), that there remains a constant awareness that such standards may retard innovation, and that means exist whereby such standards can co-exist (in some manner – and it is yet rather unclear how this could actually work in practice) with a new or different set of UIs for different purposes.

