Archive for category Work

Persuasion

Once again, PsyBlog posts a useful article on a rather easy persuasion technique: BYAF – go here (subscribe to his blog and buy his book).

This is what he had to say on the subject:


 

I’ll admit it. A few of the techniques for persuasion I’ve covered here on PsyBlog have been a little outlandish and impractical.

Things like swearing, talking in the right ear and pouring coffee down someone’s throat. The studies are interesting and fun but not widely useful.

The question is: which persuasion technique, based on psychological research, is most practical, can easily be used by anyone in almost any circumstances and has been consistently shown to work?

The answer is: the ‘But You Are Free’ technique. This simple approach is all about reaffirming people’s freedom to choose. When you ask someone to do something, you add on the sentiment that they are free to choose.

By reaffirming their freedom you are indirectly saying to them: I am not threatening your right to say no. You have a free choice.

A recent review of the 42 psychology studies carried out on this technique has shown that it is surprisingly effective given how simple it is (Carpenter, 2013). All in all, over 22,000 people have been tested by researchers. Across all the studies it was found to double the chances that someone would say ‘yes’ to the request.

People have been shown to donate more to good causes, agree more readily to a survey and give more to someone asking for a bus fare home.

The exact words used are not especially important. The studies have shown that using the phrase “But obviously do not feel obliged,” worked just as well as “but you are free”.

What is important is that the request is made face-to-face: the power of the technique drops off otherwise. Even over email, though, it does still have an effect, although it is somewhat reduced.

The BYAF technique is so simple and amenable that it can easily be used in conjunction with other approaches.

It also underlines the fact that people hate to be hemmed in or have their choices reduced. We seem to react against this attempt to limit us by becoming more closed-minded.

The BYAF technique, as with any good method of persuasion, is about helping other people come to the decision you want through their own free will. If they have other options, like simply walking away, and start to feel corralled, then you can wave them goodbye.

On the other hand, respecting people’s autonomy has the happy side-effect of making them more open to persuasion. You can look good and be more likely to get what you want. Nice.

 


Fear of Loss

Now here’s a quote for you:

Employees do not fear change – they fear loss: loss of status, loss of power, loss of freedom to make decisions, and loss of purpose.

It comes courtesy of Judith Glaser and her DRIVE methodology, part of her Creating WE offering.

Makes sense really – and completely re-frames the whole “change management” chestnut that is rolled out in organisations all the time.

 


Getting Things Done – Boost Your Productivity

No, this is not an article on the GTD to-do list or task management system.  But it is in the same field – a simple method of organising yourself and your day so that you actually end up achieving what you want to achieve (rather than procrastinating by chasing interesting tidbits of trivia, doing merely urgent work (see my previous post on achieving priorities) or just wasting time to get through another day – and paycheck).

This article is by Sami Paju who blogs on positive psychology, productivity and human performance according to his blog byline.  I have just come across this blog, but it is rather interesting and serves up useful tidbits (there I go again, wasting my time on tidbits – just like I am not supposed to do!).  Subscribe to his blog and give him a go.

The productivity article, some of which I have reproduced below to give you a flavour of the full thing, is about organising what you should be doing into chunks of 30 or 60 minutes in a calendar, as proper diary entries – just as if they were important meetings which you have to attend (and be prepared for).  It certainly works and is worth remembering whenever you hit a rut.  Enjoy … …
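
For the programmatically inclined, here is a toy illustration of the idea (my own sketch, not Sami’s – the tasks, durations and start time are invented purely for illustration): carve the day into fixed 30- or 60-minute blocks and print them out as if they were diary appointments.

    from datetime import datetime, timedelta

    # Toy sketch of calendar time-blocking: each task gets a fixed 30- or
    # 60-minute slot, booked back-to-back like real meetings.
    # The task list and start time are invented purely for illustration.
    tasks = [
        ("Write project proposal", 60),
        ("Clear email backlog", 30),
        ("Review budget figures", 60),
        ("Plan next week's priorities", 30),
    ]

    slot_start = datetime(2013, 1, 14, 9, 0)  # start of the working day

    for name, minutes in tasks:
        slot_end = slot_start + timedelta(minutes=minutes)
        print(f"{slot_start:%H:%M}-{slot_end:%H:%M}  {name}")
        slot_start = slot_end

Each printed line is, in effect, a meeting with yourself – put it in the calendar and treat it with the same respect as any other appointment.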

Read the rest of this entry »


Achieving Priorities

Quora just posted an interesting article entitled “How to master your time”, written by Oliver Emberton.  A little simplistic in places and occasionally brutal, it nevertheless provides an important reminder on how to go about achieving something that you want to achieve, rather than continually distracting yourself into a zero-output oblivion.

I am sure that Oliver won’t mind me reproducing the core elements of what he said – go read the original – it has some nice little pictures in it … …


10 Technology Trends for 2013

Around this time of the year (ie New Year’s), people do like predicting what is going to happen in the forthcoming year.  Apart from the obvious “game” (chortling at the predictions from years gone by which came nowhere near coming true), there is still usefulness in both preparing and reviewing predictions for the future.  They do assist in focussing one’s thoughts, thinking through issues of importance, identifying directions and informing action.

So, in this spirit, Lunarpages (who are great hosting providers, in the same league as Dreamhost, also great hosting providers), offer their 10 Technology Trends for 2013.  Enjoy!

I could not just leave it with the Lunarpages futures.  Here are a couple of extras …

  • A “liquid” screen which raises real buttons/keys when required (and they disappear when not required for typing), from Tactus Technology (see TechCrunch article);

 

Read the rest of this entry »


PsyBlog’s 10 Most Popular Psychological Insights From 2012

Here is PsyBlog’s 2012 Top Ten – definitely worth a read again (and again and again): http://www.spring.org.uk/2012/12/psyblogs-10-most-popular-psychological-insights-from-2012.php


Six Honest Serving-Men

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

  – Rudyard Kipling


Zuhandenheit and the Cloud UI

Derek Singleton from Software Advice recently wrote a blog article postulating that maybe it was now time for a set of standards associated with the user interface for cloud applications to be formulated. I think he was mainly referring to business applications in the cloud – you know, ERP systems, CRM systems, etc.

Reasonable enough concept, and easy to understand within the frame of retrospective analysis of technological (and, in many cases, social) advances. Standardisation typically allows one to conveniently abstract out (sometimes complex) details of particular technologies, such that one can concentrate on the task at hand, rather than having to focus on solving a foundational issue for every case. Standards allow for rapid incremental progress.

The ICT world is replete with examples of standardisation assisting in such a manner. Indeed, one could argue that most of ICT only works because of such an approach. I am writing this on a tablet computer, sitting outside in the early morning, experiencing one of the most beautiful environments in the world. The tall straight trees crowned in a verdant canopy juxtaposed against the bluest of cloudless blue skies stretching forever would bring joy to any soul in any situation. (Why, then, you should ask, would I be writing about Cloud UI standards when presented with such a sight? Well might you ask. Because there is no answer to such an antithetical query. Let us continue).

I can concentrate on thinking and writing what you are now reading because I do not need to think about how the buttons on the keyboard connected to the screen are translated into electrical signals, which are then interpreted by an operating system (in simple terms – I don’t want to explain everything about every little element of computers) into characters for input into a program, which will then issue instructions to display said characters on a screen, not worrying about how shapes are pixelated to create a readable representation for me to understand and continue writing from.

Not only do I not need to worry about such issues, but the creators of the program I am currently using (Evernote – fast becoming a “must-have” on every portable computing device in the world) also did not need to worry about such issues. The abstraction of the operating system from base hardware, according to various standards (de facto and de jure – in a sense) meant that they could concentrate on implementing a wonderful information management tool. Further than that, a standard approach to certain elements of use of the operating system meant that they could implement their program relatively straight-forwardly (I am not saying it might have been easy) on a number of operating systems.

Such capabilities would have been unthinkable fifty years ago (in ICT terms). I am also able to take advantage of a WiFi connection to sit outside and enjoy the world around me (looking up when the kookaburra call intrudes, to once more savour the sight), in order to record the information I have typed, via the internet, on a server somewhere in the world (as well as on my tablet). Do I need to know about channels and frequencies, IP addresses and DHCP, routers and relays? I think not. Thank goodness I don’t. If I had to deal with any of these issues every time I had to write something, nothing would ever get written. Ditto for the Evernote programmers. Ditto for the Android programmers. Ditto for providers of the cloud service running the data storage service. Ditto for almost everyone marginally involved in the vast network of relationships of functional delivery of this technological world, apart from a small number of people who must actually create or fix such networking software and hardware.

This is standardisation at work. And it works. Much more than it doesn’t work. And it creates and creates and creates. It applies not only to ICT, but to engineering generally, science generally, any technological endeavour generally, human-centred and planned systems generally, and even throughout society. If one wanted to wax philosophical, there could be some deep reasons behind all this.

Which indeed there appear to be. Through the work of Martin Heidegger (1889-1976), one of the most pre-eminent philosophers of the 20th century. Do yourself a favour (to channel a minor celebrity). Read up on the works of Martin Heidegger, even if you don’t read his actual works (he is a little hard to get into – not least due to the very specific terminology that he uses, relying as it does on the original German). Heidegger attacked questions of the essentialness of Being, what it is to exist. Within the world in which we live.

His work, thus, addresses issues of how we, as humans, operate within the world, how we interact with things and other entities. This has applicability not only to general human behaviour, but also to computing – how we use and interact with computer systems (as artefacts in the world). This thesis was developed and written about by Fernando Flores and Terry Winograd in their seminal book “Understanding Computers and Cognition”. They described the Heideggerian concept of Zuhandenheit (“Ready-at-Hand”) in computing terms. In essence, ready-at-hand (or readiness-at-hand) is the concept of how some artefact / entity becomes part of our world in such a manner that one does not need to think at all in order to use the artefact. Its place in the world (which includes how it is used) is at all times “ready” for us to use (or relate to). There is no necessity for one to remove oneself from the world situation one is currently in, in order to deal with the artefact. It is only when the entity changes from being Zuhandenheit to Vorhandenheit (meaning “Present-at-Hand”) that it comes into the foreground of one’s attention, and must be dealt with consciously in some manner.

The famous example used is that of a hammer. In normal circumstances, for most people brought up in a civilised Western tradition (the context is rather important for the concept of Zuhandenheit), a hammer is “ready-at-hand”. One simply picks it up and hammers, without needing to consciously think about how to use the hammer. It is only when the hammer is broken in some manner that it becomes “present-at-hand”, when the item is now “present” in front of one, requiring one’s attention – in order to fix it, or work out how to use it in its broken state.

Present-at-hand means that one is not concentrating on the task required or desired, but rather, must focus attention and consciousness onto the entity which is present-at-hand, determining what to do with it and how to use it, possibly fixing it, before being able to then re-focus on the task-at-hand.

Present-at-hand is a necessity in many situations, particularly novel circumstances, but is positively detrimental if it becomes overwhelming. Items present-at-hand must fade into ready-at-hand if one is to be able to successfully navigate the complex and chaotic world one lives and works in.

This concept of Zuhandenheit applies directly to computing. One could argue that the most effective computing is one that is fully ready-at-hand. No need to think, simply access the computing facility and it is done for one. It is the dream of the artificial intelligence community (for many of them). It is represented in science fiction – such as the computer (with the voice of Majel Barrett) in Star Trek. Always there, simply need to talk to it, perfectly divines one’s needs and intentions, and then executes without error. On the bridge of the Enterprise, there is never a need to get the manual out to work out which menu item hidden five deep in a dialogue box, which arcane key combination, which parameter in which format for which API call one needs to know, understand and apply to get something done. If that was the case, the Klingons would have long ruled the Empire (to mix filmic metaphors a touch).

Of course, the current state of the art is light-years removed from the science fiction of Star Trek and other futuristic visions. But the basics of the concept apply to everyday use of computers. It would be a tedious working life if every time one had to type a memo, one had to look up help or read a manual or ask for assistance to perform the simplest of activities – to underline a phrase, or to indent a paragraph, or edit a mis-typed word. Work would barely be finished if every document was an arduous “hunt and peck” on the keyboard.

For most of today’s office workers, the QWERTY keyboard is ready-at-hand. One does not need to think to use the keyboard (even if one is looking at the keys to ensure that the fingers are in the correct place). One can concentrate on what is to be said rather than where on earth is that “!” key. The readiness-to-hand only needs to break marginally for the frustration and problems of the present-at-hand to become apparent. If you were brought up in North America, England, Australia, etc, have you ever tried typing on a German keyboard, or a Spanish keyboard, etc? Have you gone to hit the @ key for an email address and found it is a completely different character? Now where is that @ key? I simply want to type in my email address, which typically takes all of one second, yet here I am desperately scanning every key on this keyboard to find a hidden symbol. Finally, after trying two or three other keys, there it is. Thank goodness. Only to next be snookered because the Z key is now somewhere else as well.

But being ready-to-hand does not necessarily mean the best or most efficient (or effective). Over the years, many people have contended that a Dvorak keyboard is a much better keyboard layout to use, from a speed, accuracy of typing, and ergonomic perspective. But, it has never caught on. Why? Mostly because of the weight of “readiness”, the heaviness of “being”, which is the qwerty layout.

People develop what is loosely called “muscle memory”. Their fingers know where to go, just through the muscles alone – they do not need to think about where their fingers need to be in order to type a word – the muscles in the fingers automatically move to the correct spots. No thinking required. All the mental processing capacity in one’s brain can be applied to what is being written rather than the typing itself (note the ready-to-hand aspect). It is just too much effort to re-train one’s muscles, with little perceived benefit. And if all the consumer population “profess” to wanting/using a qwerty keyboard, then all the vendors will supply a qwerty keyboard. And if the only keyboards commonly available are qwerty keyboards, then people will only know how to use a qwerty keyboard, their muscles will be trained, and they will only want to use a qwerty keyboard. And thus the cycle repeats on itself. Self-reinforcing. Qwerty keyboard it is, from now until eternity.

Unless there is a disruption or discontinuity. So, if there is no keyboard, how will one type? Or if there are only a limited number of keys, how does one enter text from the full alphabet? Enter the world of mobile phones, and then smartphones. With only a limited number of numeric keys, text can be entered using innovative software (ie predictive text). When first attempted, this situation is blatantly “present-at-hand”. It takes some time to get used to the new way of entering text/typing. But for most people, some continued practice whilst fully present-at-hand soon leads to such typing being ready-at-hand (although maybe not as effective or efficient as a full-sized qwerty keyboard).

But, be aware, not all people make the transition from the present-at-hand to the ready-at-hand. Some people just can not use predictive text entry – they revert to previous methods of phone use, or no texting at all.

What happens if there are not even numeric keys? The touch-sensitive smartphone (or tablet) can emulate a qwerty keyboard (or a slightly modified one), but the reduced efficacy of the onscreen qwerty keyboard makes it rather less ready-at-hand. Each time a character is not typed correctly, or multiple keystrokes have to be executed to enter a single character (such as when entering some of the special characters using the standard iPhone onscreen keyboard), or a word is auto-corrected incorrectly, the artefact is “broken” – a little like the broken hammer – and has to be used specially, now “present-at-hand”, in order to achieve the end result. This makes one more open to learning an alternative process of entering text (a new present-at-hand leading, possibly, to being ready-at-hand) which may be more efficient. The barriers to learning anew are reduced, allowing the learning to be attempted, and the new thereby mastered.

Such thinking tends to be somewhat embedded in design culture. Take, for example, this excerpt from an article in Infoworld on the use of iPads in SAP:

“Bussmann is a big fan of design thinking, as he says the mobile experience is different even for things that can be done on a PC. He notes that people seem to grok and explore data more on an iPad than a PC, even when it’s the same Web service underlying it. He notes that PCs tend to be used for deep exploration, while a tablet is used for snapshot trend analysis. He compares email usage on a PC to email usage on a BlackBerry, the latter being quicker and more of the moment. If mobile apps were designed explicitly for the different mentality used on a tablet, Bussmann believes that the benefits of using iPads and other tablets would be even stronger.”

The point is that different artefacts have different contexts, leading to different modes of operation (new elements are ‘ready-to-hand’ which allow new modes of thinking and operation).

Good design (in a computing context) is all about readiness-to-hand. How to design and implement a facility which is useful without requiring excess effort. How to design something that works without undue learning and constant application of thought.

What does this mean in relation to UI standards?

Such standards are an attempt to generate a readiness-to-hand. If every keystroke means the same thing, across all applications, then “muscle memory” can kick in, and one need not bother learning certain aspects of the application, but rather, can commence using it, or use various (new) features, without undue effort.

A simple example. In many applications under MS Windows, the F2 key “edits” the currently selected item, say a filename within a folder, allowing one to change the name of a file. When working with many documents and files on a continuous basis, it has become almost instinctive for the second littlest finger on the left hand to reach for and hit the F2 key, then type in the new name for the file. When hitting the F2 key does not result in allowing the filename to be changed, but rather, results in some other action, frustration soon sets in (and by soon, one means explosively and immediately).

The next question is – which key renames a filename? None of the current F keys. Access “Help”. Find that it is Shift-F6. Now, who would have thought that? Why pick Shift-F6? Why not Ctrl-Alt-Shift-F12? Or any other key combination. I am sure that the author of the software had a good reason to use Shift-F6 for file rename (indeed, if one carefully reviews all the key combinations for functions in the program, one can see some logic relating to the assignments of keys). Further, in some deep recess of memory, I recollect that the key assignments may relate to use with another operating system, in another context, according to another (so-called) “standard”. None of which is much help when one is suddenly thrust from readiness-at-hand into present-at-hand and spends valuable time trying to achieve a very simple operation.

Fortunately, the author of the program wrote his code such that every function could be mapped to a different key (and vice versa, obviously). This means that I can re-map the F2 key to perform a file rename. It is a bit of a pain when it has to be done on every computer that the program is used on (the trauma and destructiveness of the PC-centric world – which, one day, the “cloud”-world will fix), but the benefit of having File-Rename ready-to-hand outweighs the cost of having to perform a key re-mapping once in a while (for me at least).
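
To make the idea concrete, here is a minimal sketch of what such a remappable keymap might look like. This is not the actual program’s configuration format (nor any particular library) – just a hypothetical model in which each key binds to exactly one action and each action to one key, which is what makes re-mapping F2 to File-Rename possible.

    # Hypothetical model of a user-remappable keymap (not any real program's
    # configuration format): one key per action, one action per key.
    DEFAULT_KEYMAP = {
        "Shift+F6": "rename_file",   # the program's out-of-the-box binding
        "F2":       "edit_item",
        "Ctrl+C":   "copy",
    }

    def remap(keymap: dict, key: str, action: str) -> dict:
        """Return a new keymap with `key` bound to `action`.

        The key's old action and the action's old key are both dropped, so the
        mapping stays one-to-one; in this toy model the displaced action is
        simply left unbound.
        """
        new_map = {k: a for k, a in keymap.items() if a != action and k != key}
        new_map[key] = action
        return new_map

    # Restore F2 as the ready-at-hand rename key, matching muscle memory.
    my_keymap = remap(DEFAULT_KEYMAP, "F2", "rename_file")
    print(my_keymap)  # {'Ctrl+C': 'copy', 'F2': 'rename_file'}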

Unfortunately, other people may not be quite as “techno-savvy” as I am, not realising that it is possible to re-map the function keys. Or, programs are not coded in such a flexible manner, such that function keys can not be re-mapped within the program. Thus, people are “forced” to endure a facility that is plainly not ready-at-hand – and waste hours of valuable time, building levels of frustration that can never be relieved. It is no wonder that many people find dealing with computers difficult and “non-intuitive” (a nebulous ill-defined term that I normally eschew, in favour of the more technical and proper philosophical and psychological terms which may apply – but used here as a reference to the nebulous ill-at-ease feeling that people dealing with such non-ready-at-hand computing describe as being “non-intuitive”), and that people, once they have learnt one program or system etc are loath to learn another, and are slow to embrace additional capability or functionality within the program or system that they (purportedly) know.

Or, in another twist on not being ready-at-hand, the program (or operating system, or other facility) may simply not implement the desired capability. Thus, on the tablet I am currently using to type this exploration, there are no keystrokes to move back (or forward) a single word, even though the tablet has a keyboard attached (which is not too bad, but does miss keystrokes on a regular basis – thus making it also non-ready-at-hand in an irritatingly pseudo-random manner), and that keyboard has arrow keys and a control key on it. Using the Android operating system and Evernote (and other programs), there is simply no quick keyboard-oriented way to go back many characters to fix a typo (refer to the previous problem with the keyboard missing characters, which creates the need to quickly reposition the cursor to fix the problem). Thank goodness that the operating system / program implemented the “End” key, to quickly go to the end of the line. Under MS Windows, Linux and other operating systems, and the major word editing programs on those systems, the Ctrl-Left Arrow combination to go back one word is almost always implemented. When it isn’t, it is sorely missed.

No doubt, the developers of the Android tablet based systems are working with the paradigm that most of the navigation will be performed using the touchscreen capabilities – and therefore that will suffice for moving back a word or so. Little do they realise that forcing someone to take their fingers off the keyboard and try to place them on a screen, in a precise location is violently yanking them from being fully ready-at-hand (typing, typing, typing as quickly as they can think) into a difficult present-at-hand situation (have you ever tried to very precisely position a cursor in smallish text on a partially moving screen with pudgy fingertips?).

Thus, one of the issues for readiness-to-hand and computing relates to the context in which an element is meant to be ready-at-hand. Mix contexts, take an element out of one context and place into another, and suddenly what works well in one place does not work so well anymore.

Which leads back to the question of standards for UIs. Such standards will need to be thought through very carefully – lest they lead to further “presentness”, and not the desired effect of “readiness”. This is predicated, obviously, on the proposition that the main reason for promulgating and adopting standards is to effect a greater ready-at-hand utility across the elements of one’s computing usage. The standards are meant to enhance the situation whereby knowledge gained in the use of one system, program or facility can be readily transferred into the use of another system, program or facility.

So, promulgating a standard that “theoretically” works well given one context may actually lead to deterioration in readiness-at-hand within another context. This may be at the micro level – for instance, a standard for certain key-mappings which works well within a browser-based environment on a PC may be completely ineffective on a smartphone or tablet. And even detrimental if the alternatives to using keys are not well thought through on the other platforms.

The “breakage” may be at a more macro level, whereby standards relating to screen layout and menus and presentation elements may be completely inappropriate and positively detrimental when the program requires a vastly different UI approach in order to maximise its facility (efficacy and efficiency). Thus, such screen presentation standards for an ERP suite may be wholly inappropriate for the business intelligence aspects of that ERP suite.

Or, to put it another way, one may implement a BI toolset within the ERP suite that conforms to the prevailing UI standard. Its readiness-at-hand will be sub-optimal. But the hegemony of the “standard” will prevent efforts at improvements to the BI UI that do not meet that standard, thus stagnating innovation, but more importantly, consigning those using the system to additional and wasted effort, as well as meaning that such a system will not be used, or not used to the extent that it could or should. I am sure that many readers have numerous examples of implementing systems which, on paper, tick all the boxes and seem to address all criteria, yet are simply not used by those in the organisation that such systems are intended for (or are used grudgingly and badly). One of my favourite tropes in this area is corporate document management systems. Intended for the best possible reasons, they have the weight of corporate “goodness” behind them (just like white bread), yet never seem to deliver on their promises, are continually subverted, and blatantly fail the readiness-at-hand test, both for material going in and for retrieving material later. Unfortunately (or fortunately for this missive), that is a story for another day.

When faced with the intransigence of a “standard”, either de facto or de jure, it typically takes a paradigm shift in some other aspect of the environment to allow a similar shift with respect to the “standard”. Thus, in recent times, it was the shift to keyboard-less touchscreen smartphones and tablets which required a rethink of the predominant operating system UI. For a variety of reasons, that UI (or, let us say, UI concept) quickly came to prominence and predominance (debatable but defensible if one considers that Apple, as the main purveyor of such UIs, has the greatest capitalisation of any company in the world at the moment. Not directly relating to the UI (ie there is no direct causal connection), but with the UI no doubt making a contribution to the success of the offerings).

It took that paradigm shift across a range of factors in the “world-scene” of computing to make such large changes to the UI standards. The incumbent determinant of the standard (in this instance, let us say, Microsoft Windows) would not make such large-scale changes. Their inertia, and the inertia of those using the system, would not allow large-scale changes to be made. Not until faced with the overwhelming success of an alternative was such an alternative belatedly added to the UI for Windows (well, in what could graciously be called an embodiment of the new alternative UI, if that is what one so desires).

This is not to say that such inertia, or the definition of a “standard” (de jure or de facto), is necessarily a bad thing. Making too many changes too quickly, in any environment, leads to much being “broken” into present-at-hand, requiring too much effort simply to achieve day-to-day operation. Destruction of readiness-at-hand becomes so unsettling that little effective work is achieved. Herein lies the rub with “change” within an organisation. Change is a necessity (change is simply another name for living and thriving. If one’s body never changed, one would be dead. Think of the biology – the deep biology), but any change means at least something becomes present-at-hand, at least for a moment, until it is processed and absorbed into being ready-at-hand. The secret to success at change management is minimising this disruption from ready-at-hand to present-at-hand to ready-at-hand again.

“Inertia” is the means whereby learned behaviours are most efficiently exploited to the greatest effect – PROVIDED that the environment or context has not changed. Such a change necessitates a re-appraisal, since, as per the definition above, something ready-at-hand in one context or environment will not be so in another (almost by definition).

Thus, a standard for the UI of cloud-based applications, principally ERP applications (and those applications orbiting within the ERP “solar system”), is a “GOOD THING” – provided that the contextualisation of those standards is well thought through (well defined, well documented and genuinely applicable), that the awareness remains ever pre-eminent that such standards may retard innovation, and that there are means whereby such standards can co-exist (in some manner – it is as yet rather unclear how this could actually work in practice) with a new or different set of UIs for different purposes.



Being Creative

YAPP (Yet Another PsyBlog Post) …

This time about being creative.

Apart from the discussion on different or unusual thinking styles (for the person themselves) to enhance creativity, the biggest “take-away” for me from the post was the exhortation to simply remind people that they are to be creative (say, in a particular situation), and this simple reminder will give them permission (so to speak) to be creative.  This is, thus, rather important in a business setting, because in many (if not most) instances, people are told, either explicitly or implicitly, to just follow the rules or procedures and do what has been done before, rather than being allowed to express some creativity in order to solve a problem or improve the situation at hand.

The relevant paragraph is:

Another way of encouraging creativity is simply to be reminded that creativity is a goal.  It seems too simple to be true, but research has found that just telling people to ‘be creative’ increases their creativity (e.g. Chen et al., 2005).

Anyway, the full article is below, and please consider buying Jeremy’s e-book “How To Be Creative” (see ad at the bottom of the article).  I bought it – it is a great little read.

Unusual Thinking Styles Increase Creativity


Psychological research reveals how rational versus intuitive thinking can inspire new ideas.

The idea of creativity is wonderful: that a spark of inspiration can eventually bring something new and useful into the world, perhaps even something beautiful. Something, as it were, from nothing.

That spark may only be the start of a journey towards the finished article or idea, but it is still a wonderful moment. Without the initial spark there will be no journey. It’s no exaggeration to say that our ability to be creative sits at the heart of our achievements as a species.

Do incentives work?

So, how do you encourage creativity in yourself and in others? I discuss this question of how to be creative in my recent ebook on creativity. There I describe six principles, based on psychological research, that can be used to understand and increase creativity.

But, what methods do people naturally use to encourage creativity? In the creative industries the usual method is money, or some other related incentive. So, can incentives encourage people to be creative?

According to the research, they can, but crucially these incentives need to emphasise that creativity is the goal (Eisenberger & Shanock, 2003). Studies find that if people are given an incentive for just completing a task, it doesn’t increase their creativity (Amabile et al., 1986). In fact, incentives linked to task completion (rather than creativity) can reduce creativity.

Another way of encouraging creativity is simply to be reminded that creativity is a goal. It seems too simple to be true, but research has found that just telling people to ‘be creative’ increases their creativity (e.g. Chen et al., 2005).

The theory is that this works because people often don’t realise they’re supposed to be looking for creative solutions. This is just as true in the real world as it is in psychology experiments. We get so wrapped up in deadlines, clients, costs and all the rest that it’s easy to forget to search for creative solutions.

People need to be told that creativity is a goal. Unlike children, adults need to be reminded about the importance of creativity. Perhaps it’s because so much of everyday life encourages conformity and repeating the same things you did before. Doing something different needs a special effort.

Rational versus intuitive thinking

However telling someone to ‘be creative’ is a bit like telling them to ‘be more clever’ or ‘be more observant’. We want to shout: “Yes, but how?!”

Along with the techniques I suggest in my ebook, another insight comes from a new study on stimulating creativity. This suggests one solution may lie in using an unusual thinking style—unusual, that is, to you (Dane et al., 2011). Let me explain…

When trying to solve problems that need creative solutions, broadly people have been found to approach them in one of two ways:

  1. Rationally: by using systematic patterns of thought. This involves relying on specific things you’ve learnt in the past, thinking concretely and ignoring gut instincts.
  2. Intuitively: by setting the mind free to explore associations. This involves working completely on first impressions and whatever comes to mind while ignoring what you’ve learnt in the past.

The researchers wondered if people’s creativity could be increased by encouraging them to use the pattern of thinking that was most unusual to them. So, those people who naturally preferred to approach creative problems rationally, were asked to think intuitively. And the intuitive group was asked to think rationally for a change.

Participants were given a real-world problem to solve: helping a local business expand. The results were evaluated by managers from the company involved. When they looked at the results, the manipulation had worked: people were more creative when they used the thinking style that was most unusual for them.

One of the reasons this may work is that consciously adopting a different strategy stops your mind going down the same well-travelled paths. We all have habitual ways of approaching problems and while habits are sometimes useful, they can also produce the same results over and over again.

A limitation of this study is that it only looked at the generation of new ideas. This tends to occur mostly at the start of the creative process. So once ideas have been generated and a more analytical mindset is required, these techniques may not work so well (I discuss this balance between a wandering and focused mind in principle six of my ebook).

Image credit: gfpeck

How to Be Creative


Why Concrete Language Communicates Truth

Another pertinent post from PsyBlog, this time about communicating (mostly writing) effectively. Particularly relevant for consultants, but applicable for all employment in all fields. Click on the header below to go to the original article.

Have I said before that you should be subscribing to PsyBlog? Well, it is about time you did so – click here!

Speak and write using unambiguous language and people will believe you.

I’ve just deleted a rather abstract introduction I wrote to this article about truth. The reason? I noticed I wasn’t taking the excellent advice offered in a recent article published in Personality and Social Psychology Bulletin. That advice is simple: if you want people to believe you, speak and write concrete.

There are all sorts of ways language can communicate truth. Here are some solid facts for you:

  • People usually judge that more details mean someone is telling us the truth,
  • We find stories that are more vivid to be more true,
  • We even think more raw facts make unlikely events more likely.

But all these involve adding extra details or colour. What if we don’t have any more details? What if we want to bump up the believability without adding to the fact-count?

Just going more concrete can be enough according to a recent study by Hansen and Wanke (2010). Compare these two sentences:

  1. Hamburg is the European record holder concerning the number of bridges.
  2. In Hamburg, one can count the highest number of bridges in Europe.

Although these two sentences seem to have exactly the same meaning, people rate the second as more true than the first. It’s not because there’s more detail in the second—there isn’t. It’s because it doesn’t beat around the bush, it conjures a simple, unambiguous and compelling image: you counting bridges.

Abstract words are handy for talking conceptually but they leave a lot of wiggle-room. Concrete words refer to something in the real world and they refer to it precisely. Vanilla ice-cream is specific while dessert could refer to anything sweet eaten after a main meal.

Verbs as well as nouns can be more or less abstract. Verbs like ‘count’ and ‘write’ are solid, concrete and unambiguous, while verbs like ‘help’ and ‘insult’ are open to some interpretation. Right at the far abstract end of the spectrum are verbs like ‘love’ and ‘hate’; they leave a lot of room for interpretation.

Even a verb’s tense can affect its perceived concreteness. The passive tense is usually thought more abstract, because it doesn’t refer to the actor by name. Perhaps that’s partly why fledgling writers are often told to write in the active tense: to the reader it will seem more true.

Hansen and Wanke give three reasons why concreteness suggests truth:

  1. Our minds process concrete statements more quickly, and we automatically associate quick and easy with true (check out these studies on the power of simplicity).
  2. We can create mental pictures of concrete statements more easily. When something is easier to picture, it’s easier to recall, so seems more true.
  3. Also, when something is more easily pictured it seems more plausible, so it’s more readily believed.

So, speak and write solidly and unambiguously and people will think it’s more true. I can’t say it any clearer than that.

Image credit: Lee Huynh
