
10 Technology Trends for 2013

Around this time of the year (ie New Year’s), people do like predicting what is going to happen in the forthcoming year.  Apart from the obvious “game” (chortling at the predictions from years gone by which came nowhere near coming true), there is still value in both preparing and reviewing predictions for the future.  They do assist in focussing one’s thoughts, thinking through issues of importance, identifying directions and informing action.

So, in this spirit, Lunarpages (who are great hosting providers, in the same league as Dreamhost, also great hosting providers) offer their 10 Technology Trends for 2013.  Enjoy!

I could not just leave it with the Lunarpages futures.  Here are a couple of extras …

  • A “liquid” screen which raises real buttons/keys when required (and they disappear when not required for typing), from Tactus Technology (see TechCrunch article);

 



A Guide to Implementing Cloud Services

On the 19th September 2012, the Australian Government Information Management Office (AGIMO), as part of the Australian Government Department of Finance and Deregulation, released a paper entitled “A Guide to Implementing Cloud Services”, in PDF and DOC forms.

Overall, a pretty reasonable document, providing some practical guidance on going about implementing cloud-based services within an organisation.  Rather obviously, it is written mostly from the perspective of, and to be used by, government agencies, but anyone could effectively use this document (with appropriate modification as one sees fit).  In particular, it provides a reasonable (but not exhaustive) set of checklists addressing various aspects of effective implementation and use of cloud services.

 


Zuhandenheit and the Cloud UI

Derek Singleton from Software Advice recently wrote a blog article postulating that maybe it was now time for a set of standards associated with the user interface for cloud applications to be formulated. I think he was mainly referring to business applications in the cloud – you know, ERP systems, CRM systems, etc.

Reasonable enough concept, and easy to understand within the frame of retrospective analysis of technological (and, in many cases, social) advances. Standardisation typically allows one to conveniently abstract out (sometimes complex) details of particular technologies, such that one can concentrate on the task at hand, rather than having to focus on solving a foundational issue for every case. Standards allow for rapid incremental progress.

The ICT world is replete with examples of standardisation assisting in such a manner. Indeed, one could argue that most of ICT only works because of such an approach. I am writing this on a tablet computer, sitting outside in the early morning, experiencing one of the most beautiful environments in the world. The tall straight trees crowned in a verdant canopy juxtaposed against the bluest of cloudless blue skies stretching forever would bring joy to any soul in any situation. (Why, then, you should ask, would I be writing about Cloud UI standards when presented with such a sight? Well might you ask. Because there is no answer to such an antithetical query. Let us continue).

I can concentrate on thinking and writing what you are now reading because I do not need to think about how the buttons on the keyboard connected to the screen are translated into electrical signals, which are then interpreted by an operating system (in simple terms – I don’t want to explain everything about every little element of computers) into characters for input into a program, which will then issue instructions to display said characters on a screen, not worrying about how shapes are pixelated to create a readable representation for me to understand and continue writing from.

Not only do I not need to worry about such issues, but the creators of the program I am currently using (Evernote – fast becoming a “must-have” on every portable computing device in the world) also did not need to worry about such issues. The abstraction of the operating system from base hardware, according to various standards (de facto and de jure – in a sense) meant that they could concentrate on implementing a wonderful information management tool. Further than that, a standard approach to certain elements of use of the operating system meant that they could implement their program relatively straight-forwardly (I am not saying it might have been easy) on a number of operating systems.

Such capabilities would have been unthinkable fifty years ago (in ICT terms). I am also able to take advantage of a WiFi connection to sit outside and enjoy the world around me (looking up when the kookaburra call intrudes, to once more savour the sight), in order to record information I have typed through the internet on a server somewhere in the world (as well as on my tablet). Do I need to know about channels and frequencies, IP addresses and DHCP, routers and relays? I think not. Thank goodness I don’t. If I had to deal with any of these issues every time I had to write something, nothing would ever get written. Ditto for the Evernote programmers. Ditto for the Android programmers. Ditto for providers of the cloud service running the data storage service. Ditto for almost everyone marginally involved in the vast network of relationships of functional delivery of this technological world, apart from a small number of people who must actually create or fix such networking software and hardware.

This is standardisation at work. And it works. Much more than it doesn’t work. And it creates and creates and creates. It applies not only to ICT, but to engineering generally, science generally, any technological endeavour generally, human centred and planned systems generally, and even throughout society. If one wanted to wax philosophical, there could be some deep reasons behind all this.

Which indeed there appears to be. Through the work of Martin Heidegger (1889-1976), one of the most pre-eminent philosophers of the 20th century. Do yourself a favour (to channel a minor celebrity). Read up on the works of Martin Heidegger, even if you don’t read his actual works (he is a little hard to get into – not least due to the very specific terminology that he uses, relying as it does on the original German). Heidegger attacked questions of the essentialness of Being, what it is to exist. Within the world in which we live.

His work, thus, addresses issues of how we, as humans, operate within the world, how we interact with things and other entities. This has applicability not only to general human behaviour, but also to computing – how we use and interact with computer systems (as artefacts in the world). This thesis was developed and written about by Fernando Flores and Terry Winograd in their seminal book “Understanding Computers and Cognition”. They described the Heideggerian concept of Zuhandenheit (“Ready-at-Hand”) in computing terms. In essence, ready-at-hand (or readiness-at-hand) is the concept of how some artefact / entity becomes part of our world in such a manner that one does not need to think at all in order to use the artefact. Its place in the world (which includes how it is used) is at all times “ready” for us to use (or relate to). There is no necessity for one to remove oneself from the world situation one is currently in, in order to deal with the artefact. It is only when the entity changes from being Zuhandenheit to Vorhandenheit (meaning “Present-at-Hand”) that it comes into the foreground of one’s attention, and must be dealt with consciously in some manner.

The famous example used is that of a hammer. In normal circumstances, for most people brought up in a civilised Western tradition (the context is rather important for the concept of Zuhandenheit), a hammer is “ready-at-hand”. One simply picks it up and hammers, without needing to consciously think about how to use the hammer. It is only when the hammer is broken in some manner that it becomes “present-at-hand”, when the item is now “present” in front of one, requiring one’s attention – in order to fix it, or work out how to use it in its broken state.

Present-at-hand means that one is not concentrating on the task required or desired, but rather, must focus attention and consciousness onto the entity which is present-at-hand, determining what to do with it and how to use it, possibly fixing it, before being able to then re-focus on the task-at-hand.

Present-at-hand is a necessity in many situations, particularly novel circumstances, but is positively detrimental if it becomes overwhelming. Items that are present-at-hand must fade back into being ready-at-hand if one is to successfully navigate the complex and chaotic world one lives and works in.

This concept of Zuhandenheit applies directly to computing. One could argue that the most effective computing is one that is fully ready-at-hand. No need to think, simply access the computing facility and it is done for one. It is the dream of the artificial intelligence community (for many of them). It is represented in science fiction – such as the computer (with the voice of Majel Barrett) in Star Trek. Always there, simply need to talk to it, perfectly divines one’s needs and intentions, and then executes without error. On the bridge of the Enterprise, there is never a need to get the manual out to work out which menu item hidden five deep in a dialogue box, which arcane key combination, which parameter in which format for which API call one needs to know, understand and apply to get something done. If that was the case, the Klingons would have long ruled the Empire (to mix filmic metaphors a touch).

Of course, the current state of the art is light-years removed from the science fiction of Star Trek and other futuristic visions. But the basics of the concept apply to everyday use of computers. It would be a tedious working life if every time one had to type a memo, one had to look up help or read a manual or ask for assistance to perform the simplest of activities – to underline a phrase, or to indent a paragraph, or edit a mis-typed word. Work would barely be finished if every document was an arduous “hunt and peck” on the keyboard.

For most of today’s office workers, the QWERTY keyboard is ready-at-hand. One does not need to think to use the keyboard (even if one is looking at the keys to ensure that the fingers are in the correct place). One can concentrate on what is to be said rather than on where on earth that “!” key is. The readiness-at-hand only needs to break marginally for the frustration and problems of the present-at-hand to become apparent. If you were brought up in North America, England, Australia, etc, have you ever tried typing on a German keyboard, or a Spanish keyboard? Have you gone to hit the @ key for an email address and found it is a completely different character? Now where is that @ key? I simply want to type in my email address, which typically takes all of one second, yet here I am desperately scanning every key on this keyboard to find a hidden symbol. Finally, after trying two or three other keys, there it is. Thank goodness. Only to next be snookered because the Z key is now somewhere else as well.

But being ready-to-hand does not necessarily mean the best or most efficient (or effective). Over the years, many people have contended that a Dvorak keyboard is a much better keyboard layout to use, from a speed, accuracy of typing, and ergonomic perspective. But, it has never caught on. Why? Mostly because of the weight of “readiness”, the heaviness of “being”, which is the qwerty layout.

People develop what is loosely called “muscle memory”. Their fingers know where to go, just through the muscles alone – they do not need to think about where their fingers need to be in order to type a word – the muscles in the fingers automatically move to the correct spots. No thinking required. All the mental processing capacity in one’s brain can be applied to what is being written rather than the typing itself (note the ready-to-hand aspect). It is just too much effort to re-train one’s muscles, with little perceived benefit. And if all the consumer population “profess” to wanting/using a qwerty keyboard, then all the vendors will supply a qwerty keyboard. And if the only keyboards commonly available are qwerty keyboards, then people will only know how to use a qwerty keyboard, their muscles will be trained, and they will only want to use a qwerty keyboard. And thus the cycle repeats on itself. Self-reinforcing. Qwerty keyboard it is, from now until eternity.

Unless, there is a disruption or discontinuity. So, if there is no keyboard, how will one type? Or if there are only a limited number of keys, how does one enter text from the full alphabet? Enter the world of mobile phones, and then smartphones. With only a limited number of numeric keys, text can be entered using innovative software (ie predictive text). When first attempted, this situation is blatantly “present-at-hand”. It takes some time to get used to the new way of entering text/typing. But for most people, some continued practice whilst fully present-at-hand soon leads to such typing being ready-at-hand (although maybe not as effective or efficient as a full sized qwerty keyboard).

But, be aware, not all people make the transition from the present-at-hand to the ready-at-hand. Some people just can not use predictive text entry – they revert to previous methods of phone use, or no texting at all.

What happens if there are not even numeric keys? The touch-sensitive smartphone (or tablet) can emulate a qwerty keyboard (or a slightly modified one), but the reduced efficacy of the onscreen qwerty keyboard makes it rather less ready-at-hand. Each time a character is not typed correctly, or multiple keystrokes have to be executed to enter a single character (such as when entering some of the special characters on the standard iPhone onscreen keyboard), or a word is auto-corrected incorrectly, the artefact is “broken” – a little like the broken hammer – and has to be dealt with specially, now “present-at-hand”, in order to achieve the end result. This makes it susceptible to allowing one to learn an alternative process of entering text (a new present-at-hand leading, possibly, to being ready-at-hand) which may be more efficient. The barriers to learning anew are reduced, allowing the learning to be attempted, and the new thereby mastered.

Such thinking tends to be somewhat embedded in design culture. Take, for example, this excerpt from an article in Infoworld on the use of iPads in SAP:

“Bussmann is a big fan of design thinking, as he says the mobile experience is different even for things that can be done on a PC. He notes that people seem to grok and explore data more on an iPad than a PC, even when it’s the same Web service underlying it. He notes that PCs tend to be used for deep exploration, while a tablet is used for snapshot trend analysis. He compares email usage on a PC to email usage on a BlackBerry, the latter being quicker and more of the moment. If mobile apps were designed explicitly for the different mentality used on a tablet, Bussmann believes that the benefits of using iPads and other tablets would be even stronger.”

The point is that different artefacts have different contexts, leading to different modes of operation (new elements are ‘ready-to-hand’ which allow new modes of thinking and operation).

Good design (in a computing context) is all about readiness-to-hand. How to design and implement a facility which is useful without requiring excess effort. How to design something that works without undue learning and constant application of thought.

What does this mean in relation to UI standards?

Such standards are an attempt to generate a readiness-to-hand. If every keystroke means the same thing, across all applications, then “muscle memory” can kick in, and one need not bother learning certain aspects of the application, but rather, can commence using it, or use various (new) features, without undue effort.

A simple example. In many applications under MS Windows, the F2 key “edits” the currently selected item, say a filename within a folder, allowing one to change the name of a file. When working with many documents and files on a continuous basis, it has become almost instinctive for the second littlest finger on the left hand to reach for and hit the F2 key, then type in the new name for the file. When hitting the F2 key does not result in allowing the filename to be changed, but rather, results in some other action, frustration soon sets in (and by soon, one means explosively and immediately).

The next question is – which key renames a file? None of the current F keys. Access “Help”. Find that it is Shift-F6. Now, who would have thought that? Why pick Shift-F6? Why not Ctrl-Alt-Shift-F12? Or any other key combination. I am sure that the author of the software had a good reason to use Shift-F6 for file rename (indeed, if one carefully reviews all the key combinations for functions in the program, one can see some logic relating to the assignment of keys; further, in some deep recess of memory, I recollect that the key assignments may relate to use with another operating system, in another context, according to another (so-called) “standard”). None of which is much help when one is suddenly thrust from readiness-at-hand into present-at-hand and spends valuable time trying to achieve a very simple operation.

Fortunately, the author of the program wrote his code such that every function could be mapped to a different key (and vice versa, obviously). This means that I can re-map the F2 key to perform a file rename. A bit of a pain when it has to be done on every computer that the program is used on (the trauma and destructiveness of the PC-centric world – which, one day, the “cloud”-world will fix), but the benefit of having File-Rename ready-to-hand outweighs the cost of having to perform a key re-mapping once in a while (for me at least).

Unfortunately, other people may not be quite as “techno-savvy” as I am, not realising that it is possible to re-map the function keys. Or, programs are not coded in such a flexible manner, such that function keys cannot be re-mapped within the program. Thus, people are “forced” to endure a facility that is plainly not ready-at-hand – and waste hours of valuable time, building levels of frustration that can never be relieved. It is no wonder that many people find dealing with computers difficult and “non-intuitive” (a nebulous, ill-defined term that I normally eschew, in favour of the more technical and proper philosophical and psychological terms which may apply – but used here as a reference to the ill-at-ease feeling that people dealing with such non-ready-at-hand computing describe as being “non-intuitive”), and that people, once they have learnt one program or system, etc, are loath to learn another, and are slow to embrace additional capability or functionality within the program or system that they (purportedly) know.

Or, in another twist on not being ready-at-hand, the program (or operating system, or other facility) may simply not implement the desired capability. Thus, on the tablet I am currently using to type this exploration, there are no keystrokes to move back (or forward) a single word, even though the tablet has a keyboard attached (which is not too bad, but does miss keystrokes on a regular basis – thus making it also non-ready-at-hand in an irritatingly pseudo-random manner), and that keyboard has arrow keys and a control key on it. Using the Android operating system and Evernote (and other programs), there is simply no quick keyboard-oriented way to go back many characters to fix a typo (refer to the previous problem with the keyboard missing characters, necessitating quick repositioning of the cursor to fix the problem). Thank goodness that the operating system / program implemented the “End” key, to quickly go to the end of the line. Under MS Windows, Linux and other operating systems, and the major word editing programs on those systems, the Ctrl-Left Arrow combination to go back one word is almost always implemented. When it isn’t, it is sorely missed.

No doubt, the developers of the Android tablet based systems are working with the paradigm that most of the navigation will be performed using the touchscreen capabilities – and therefore that will suffice for moving back a word or so. Little do they realise that forcing someone to take their fingers off the keyboard and try to place them in a precise location on a screen violently yanks them from being fully ready-at-hand (typing, typing, typing as quickly as they can think) into a difficult present-at-hand situation (have you ever tried to very precisely position a cursor in smallish text on a partially moving screen with pudgy fingertips?).

Thus, one of the issues for readiness-to-hand and computing relates to the context in which an element is meant to be ready-at-hand. Mix contexts, take an element out of one context and place into another, and suddenly what works well in one place does not work so well anymore.

Which leads back to the question of standards for UIs. Such standards will need to be thought through very carefully – lest they lead to further “presentness”, and not the desired effect of “readiness”. This is predicated, obviously, on the proposition that the main reason for promulgating and adopting standards is to effect a greater ready-at-hand utility across the elements of one’s computing usage. The standards are meant to enhance the situation whereby knowledge gained in the use of one system, program or facility can be readily transferred into the use of another system, program or facility.

So, promulgating a standard that “theoretically” works well given one context may actually lead to a deterioration in readiness-at-hand within another context. This may be at the micro level – for instance, a standard for certain key-mappings which works well within a browser-based environment on a PC may be completely ineffective on a smartphone or tablet. And even detrimental if the alternatives to using those keys are not well thought through on the other platforms.

The “breakage” may be at a more macro level, whereby standards relating to screen layout, menus and presentation elements may be completely inappropriate and positively detrimental when the program requires a vastly different UI approach in order to maximise its facility (efficacy and efficiency). Thus, such screen presentation standards for an ERP suite may be wholly inappropriate for the business intelligence aspects of that ERP suite.

Or, to put it another way, one may implement a BI toolset within the ERP suite that conforms to the prevailing UI standard. Its readiness-at-hand will be sub-optimal. But the hegemony of the “standard” will prevent efforts at improvements to the BI UI that do not meet that standard, thus stagnating innovation and, more importantly, consigning those using the system to additional and wasted effort, as well as meaning that such a system will not be used, or not used to the extent that it could or should be. I am sure that many readers have numerous examples of implementing systems which, on paper, tick all the boxes and seem to address all criteria, yet are simply not used by those in the organisation that such systems are intended for (or are used grudgingly and badly). One of my favourite tropes in this area is corporate document management systems. Intended for the best possible reasons, with the weight of corporate “goodness” behind them (just like white bread), they never seem to deliver on their promises, are continually subverted, and blatantly fail the readiness-at-hand test, both for material going in and for retrieving material later. Unfortunately (or fortunately for this missive), that is a story for another day.

When faced with the intransigence of a “standard”, either de facto or de jure, it typically takes a paradigm shift in some other aspect of the environment to allow a similar shift with respect to the “standard”. Thus, in recent times, it was the shift to keyboard-less touchscreen smartphones and tablets which required a rethink of the predominant operating system UI. For a variety of reasons, that UI (or, let us say, UI concept) quickly came to prominence and predominance (debatable but defensible if one considers that Apple, as the main purveyor of such UIs, currently has the greatest capitalisation of any company in the world – not directly because of the UI (ie there is no direct causal connection), but with the UI no doubt making a contribution to the success of the offerings).

It took that paradigm shift across a range of factors in the “world-scene” of computing to make such large changes to the UI standards. The incumbent determinant of the standard (in this instance, let us say, Microsoft Windows) would not make such large-scale changes. Their inertia, and the inertia of those using the system, would not allow large-scale changes to be made. It was not until faced with the overwhelming success of an alternative that such an alternative was belatedly added to the UI for Windows (well, in what could graciously be called an embodiment of the new alternative UI, if that is what one so desires).

This is not to say that such inertia, or the definition of a “standard” (de jure or de facto), is necessarily a bad thing. Making too many changes too quickly, in any environment, leads to much being “broken” into present-at-hand, requiring too much effort simply to achieve day-to-day operation. Destruction of readiness-at-hand becomes so unsettling that little effective work is achieved. Herein lies the rub with “change” within an organisation. Change is a necessity (change is simply another name for living and thriving. If one’s body never changed, one would be dead. Think of the biology – the deep biology), but any change means at least something becomes present-at-hand, at least for a moment, until it is processed and absorbed into being ready-at-hand. The secret to success at change management is minimising this disruption from ready-at-hand to present-at-hand to ready-at-hand again.

“Inertia” is the means whereby learned behaviours are most efficiently exploited to the greatest effect – PROVIDED that the environment or context has not changed. Such a change necessitates a re-appraisal, since, as per the definition above, something ready-at-hand in one context or environment will not be so in another (almost by definition).

Thus, a standard for the UI of cloud-based applications, principally ERP applications (and those applications orbiting within the ERP “solar system”), is a “GOOD THING” – provided that the contextualisation of those standards is well thought through (well defined, well documented and genuinely applicable), that there is always an awareness that such standards may retard innovation, and that there are means whereby such standards can co-exist (in some manner – it is as yet rather unclear how this could actually work in practice) with a new or different set of UIs for different purposes.



The Future of Computing (Now) – Cloud Computing

What is Cloud Computing?

In simple terms, cloud computing builds on the foundations of virtualised resources (compute resources, storage resources, network resources), providing an additional level of configuration and control across multiple virtual environments, as well as the capability of implementing “self service” facilities.

“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

This cloud model promotes availability and is composed of:

  1. five essential characteristics:
    1. On-demand self-service;
    2. Broad network access;
    3. Resource pooling;
    4. Rapid elasticity;
    5. Measured Service;
  2. three service models:
    1. Cloud Software as a Service (SaaS);
    2. Cloud Platform as a Service (PaaS);
    3. Cloud Infrastructure as a Service (IaaS); and,
  3. four deployment models:
    1. Private cloud;
    2. Community cloud;
    3. Public cloud;
    4. Hybrid cloud.

Key enabling technologies include:

  1. fast wide-area networks;
  2. powerful, inexpensive server computers; and
  3. high-performance virtualization for commodity hardware.”

(Source: NIST)

“Cloud computing is a category of computing solutions in which a technology and/or service lets users access computing resources on demand, as needed, whether the resources are physical or virtual, dedicated, or shared, and no matter how they are accessed (via a direct connection, LAN, WAN, or the Internet). The cloud is often characterized by self-service interfaces that let customers acquire resources when needed as long as needed. Cloud is also the concept behind an approach to building IT services that takes advantage of the growing power of servers and virtualization technologies.”  (Source: IBM)

Cloud Computing is now one of the “hot topics” in ICT.  Almost all major vendors have some semblance of a cloud computing offering, however that may be defined (since, as with most “hot topics”, vendors and others define an amorphous term such as cloud computing in a manner which best suits their interests).

Other terminology is sometimes used in conjunction with (and sometimes, erroneously, synonymous with) cloud computing.  The terms SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) can all be considered as sub-variants of the more generic term “cloud computing”.  The diagram in this IBM introductory material further elucidates these differences.

 

Public, Private and Hybrid Clouds

“In general, a public (external) cloud is an environment that exists outside a company’s firewall. It can be a service offered by a third-party vendor. It could also be referred to as a shared or multi-tenanted, virtualized infrastructure managed by means of a self-service portal.

A private (internal) cloud reproduces the delivery models of a public cloud and does so behind a firewall for the exclusive benefit of an organization and its customers. The self-service management interface is still in place while the IT infrastructure resources being collected are internal.

In a hybrid cloud environment, external services are leveraged to extend or supplement an internal cloud.”  (Source: IBM)

Diagrammatically, the three (3) types of cloud computing offerings can be depicted as:

Cloud Computing Types

(Source: Sam Johnston)

Private Clouds

“Private clouds presents (sic) a shift from a model where everything is customized to one of standardization. Management in such an environment is no longer about avoiding change but instead embracing it to facilitate IT’s twin goals: delivering on the needs of the business and managing the underlying resources in the most efficient way possible.

The move to private cloud represents an industrial revolution for IT, applying industrial manufacturing techniques to the provisioning of IT services, gaining standardization and automation.

Standardization is central to achieving much greater operational efficiency.  Private clouds not only facilitate standardization but dramatically increase the returns on standardization. Deploying standard infrastructure from a templatized catalog of applications is orders of magnitude faster and easier than building each application from scratch. Similar gains are available from centralizing and standardizing high availability, network management, and security.

To take advantage of the cloud, there needs to be a clear separation of the production versus consumption layer. In the cloud, the consumer (the business) has no idea – and importantly, little interest in or concern with – what hardware platform and management tools are being used to deliver services.”  (Source: VMWare)

It should be noted that some commentators (for instance, Sam Johnston in his “Random rants about tech stuff (cloud computing, intellectual property, security, etc.)”) suggest that Private Clouds are a neologism to justify various vendors’ offerings in competition with the “pure” Public Cloud model.  Nevertheless, even these commentators acknowledge that Private (and Hybrid) Clouds are likely to be used into the immediate future as organisations come to grips with a new way of providing computing facilities.

 

What Should Run in the Cloud?

Since the “cloud” can effectively implement any computing environment (operating systems, etc), then basically anything could be run in the cloud.  As with most things in life, just because it is possible does not necessarily make it either desirable or useful (or even usable).

Typically, highly interactive applications may best operate using a desktop or workstation environment (such as high end graphics manipulation, high end development IDEs, etc).  But the boundaries between a pure cloud environment and a pure desktop environment (and, now, even a pure mobile environment) are becoming increasingly blurred.  In many instances, what were previously only desktop applications now are connected to cloud facilities, typically for storage of data, but also for additional processing capabilities (for instance, to render complex images using the additional compute resources available in cloud facilities).  In the same manner, mobile applications will store (synchronise) data using a cloud facility, thereby allowing a single view of one’s data whether using a web based interface (into the cloud), a mobile device interface (ie on a smartphone) or a desktop interface (ie a MS Windows application).

In addition, applications with extremely sensitive security profiles would most likely not be run in a public cloud or hybrid environment (although could readily be conceived as operating in a secure private cloud environment).

Everything else is amenable to cloud based operation.

 

How big is Cloud Computing?

An interesting infographic from the Wikibon site and its blog provides an insight into the current and projected size of cloud computing, including the economics of why cloud computing is here to stay …

How Big is the World of Cloud Computing?
Via: Wikibon


ecoder – online programming editor

I have been looking around for a reasonable online (ie available over the internet from any location) editor for programming purposes.  There appears to be a dearth of available offerings (as at August 2010), and there certainly do not seem to be any online Integrated Development Environment (IDE) products out there (and by making this statement, I am hoping that people will disabuse me of this notion and apprise me of numerous excellent offerings!).

One of the online programming editor products I found was “ecoder” (http://ecoder.gmeditor.com/) – the “demo” site made it look OK – simple yet effective.  So, I downloaded it and had a try.

It is a PHP and Javascript play.  Download the zip file to a directory on your server, unzip it and follow the installation instructions.

I made the changes to code.php in the root directory as instructed (a sketch of the resulting settings appears after this list):

  1. Created a directory to store my documents in – outside the www directory of the server (probably a good security idea to do so, based on experiences with other such products) and indicated that directory in the $code['root'] variable
  2. Changed $code['domain_cookie'] to my domain name (ie without the www. in front of it)
  3. Set $code['name'] to my own ecoder title name – even though the instructions indicated that this variable currently did not really do anything – what the heck, might as well
  4. Did not do anything with the security settings.  I could not figure out exactly what was required with respect to these security instructions.  After posting something on the forum for ecoder (in sourceforge.net) re security, the answer virtually reflected the instructions – set up your own security script, call that immediately when going to the ecoder main page (index.php), then if the operator has logged in correctly, go back to ecoder with a variable set indicating it has passed security.  What variable (name, etc)?  Does the security module have to be in PHP?  Where to put the code (in index.php or code.php)?  Having no idea of the structure of the ecoder application and not being a PHP guru (and not having a security module lying around), I decided to ignore these security instructions for the moment.
  5. I wanted to be able to edit Python and Java programs, so I modified the $_SESSION['tree_file_types'] variable to include py and java extensions
  6. I then set the error log path to be a directory under my repo directory (why not?) and set my error email address properly.

Now, by simply going to the address of the website folder I had loaded ecoder into, believe it or not, it actually worked.  The editor came up and I could see the home page with the dummy text in it – and I could edit it.  Well done to the author of ecoder – not too shabby in terms of getting something running in less than half an hour.

So, onto doing something with the product.

I uploaded a Python program I had written.  Upload seemed to go OK.  The program made it into the left-hand file listing panel – BUT, there was no way that I could edit the file I had just uploaded.  Repeated clicking on it did nothing (it’s amazing, isn’t it, that when something won’t work, you keep clicking and clicking on the same area, in the hope that by the hundredth click, something miraculous might happen and it will suddenly work when it had not worked for the last 99 attempts.  I am sure that it has happened for someone in the past, but never for me – yet I keep repeating the same monkey behaviour every time).

So, upload had some problems.  I then tried to create a new program with the py extension from scratch.  Clicking “new file” gave me the dialog box to enter the filename and the drop-down list with all the right extensions in it – with “py” at the top (just as I had specified in the code.php program).  Except that when I went to save the new file, an error occurred – the type of file was not allowed, the extension was wrong.

Next I managed to create a file with the txt extension (a plain text file) and edit it.  All OK.

Which brought me back to thinking about security.  I still couldn’t be bothered with the instructions in the code, so I went for the raw basic .htaccess and .htpasswd solution.  Real basic stuff, but enough to work OK (provided I loaded the variables for Auth in .htaccess properly – made a mistake in the URL referencing the .htpasswd file and got errors – but managed to fix that relatively quickly) and got myself some raw security over the site.  Maybe one day I will go and create a nice login script – maybe a generic one for any system I have.  I am sure someone has written something out there for that as well.
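
For completeness, the .htaccess ended up being nothing more exotic than the classic Apache basic-auth block sketched below (paths are illustrative, not my real ones).  The gotcha that caught me out is that AuthUserFile wants a full filesystem path to the .htpasswd file, not a URL; the .htpasswd file itself can be generated with the standard htpasswd utility.

```apache
# Minimal basic-auth gate over the ecoder directory (illustrative paths).
AuthType Basic
AuthName "ecoder"
# Must be a filesystem path, not a URL, and ideally kept outside the web root
AuthUserFile /home/myaccount/.htpasswd
Require valid-user
```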

But then, back to the Python editing.  Tried the py extension again (remember – Monkey see, Monkey click).  Still no luck.

Not to be defeated, I thought I would look through the code (I know PHP well enough to be able to read what was going on).  Took a little longer than I expected.

I eventually found that at line 50 of “code/save/add.php”, the system checks that a file exists called “template.ext”, where “template” is the actual string “template” and “ext” is the extension of the file to be created. Thus, creating such a template file allows the file to be created. It places the newly created file in the tree display on the left, but the file still cannot be selected and edited.

Uh – the whole structure of the thing was now not looking so good, in that there seem to be dependencies all over the place when it comes to simply specifying that one wants to edit a file with a different extension (and no documentation at all).

But, being a determined little soul (I was going to write “sole” – with reference to the soles of one’s shoes, or feet – meaning that I now felt trodden upon and thoroughly down-trodden, but that would be an inaccuracy – I was pressing ahead!), I went searching for where the next error lay.

In the code for “type.php” (in the code/tree folder), every single extension appears to be hard-coded, with specific other fields specified for each particular type of extension.  Copying the “php” lines at the bottom of this file and changing them to “py” finally allowed the code to work for a new type of extension.

I tried to create another file – lo and behold it not only saved, but placed the file in the left hand panel with the ability to click on the filename and edit it in the right hand panel.  Everything now seemed to be in a working state – at least working enough for me to be happy that I had an online programming editor that at least allowed some editing from my own site.

Mind you, at the end of the process, the conclusion I came to was that it may be a good idea to refactor some code to allow this new extension specification to all happen within the code.php file (or better still, have the complete code.php set of variables read in from a YAML file, for instance), to set the required controls.  These are the types of situations that one should probably consider building into one’s code from the beginning – have absolutely everything specified as sets of variables in configuration or control files using a standardised format, right from the start of the exercise.  A little more work coding-wise, especially if the system is being built as one goes along, but worth the effort at the end of the day.  It would be nice to actually have a series of standard libraries to assist one in that area (I am sure they are out there – and I will need to try to find them and see how good they are).
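
By way of illustration only – this is not ecoder’s actual source, just a sketch of the kind of refactoring meant above – the per-extension details that are currently hard-coded in type.php (and implied by the template.ext check in add.php) could live in a single table, defined in code.php or read in from a configuration file, and be consulted through one helper:

```php
<?php
// Illustrative sketch only - not the real ecoder code. The idea: one table of
// per-extension settings (editor mode, template file, etc.) that both the
// "new file" logic and the tree display consult, so supporting Python becomes
// a single new entry rather than edits scattered across several files.

$code['file_types'] = array(
    'php' => array('mode' => 'php',    'template' => 'template.php'),
    'js'  => array('mode' => 'js',     'template' => 'template.js'),
    'txt' => array('mode' => 'text',   'template' => 'template.txt'),
    'py'  => array('mode' => 'python', 'template' => 'template.py'),  // new language: one line
);

// Single place to ask whether an extension is editable.
function is_editable_type($ext, $types)
{
    return array_key_exists(strtolower($ext), $types);
}
?>
```

Reading that table from a YAML (or INI) file instead would give exactly the “everything in configuration files” approach argued for above.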

The original author of ecoder responded to say that not much work had been done on the product for a while and that maybe it would get looked at over winter (I presume the Northern Winter, since we are in winter right now where I am – but our part of the world doesn’t count as real) – with a hint that someone may like to look at it and make some changes themselves.  Not really wanting to code in PHP (currently, Python and Groovy are the languages of choice), I might think about this a little longer.  And do some more research – now looking into the Mozilla Labs Bespin product to see whether that is what I want/need/can use.
