As predicted, Apple announced their flatter-designed iOS 7 yesterday. We all knew it was coming and yet there are some people who seem to be surprised and disappointed by it.
I understand people’s concerns, but iOS 7 was basically redesigned from scratch. Years of design evolution have been effectively thrown away so it could be started again. iOS 7 is not a refined and complete product, but a platform to build on and continuously evolve. It is a new starting point, and it surprises me that some designers have missed this argument completely.
iOS 7 is not perfect, far from it, but it marks a shift in the design principles of the OS, and a signal that Apple’s approach to design is changing for the better across all products. We will see iOS 7 change, adapt and evolve, and we will see criticisms and questions answered over time. One day it will be a ‘completed’ product, but we have to allow it time to grow into its own boots first.
There are definitely things I would like to see changed in iOS; likewise, there are things I love. Give it time and it will all come together, and if it doesn’t – well, there’s always Android!
There has been a lot of speculation about Jony Ive’s involvement in the next version of iOS, expected to be announced and previewed at WWDC next week, something I was also quick to comment on.
Initially the consensus was that the OS would feature a flatter design, marking Apple’s transition away from superfluous skeuomorphic UI elements, like the glossy icons and stitched leather toolbars.
However, there are some who believe that the change in UI would not be as drastic as proponents of the flat style think (or hope), and that it would not be an overhaul but a gradual transition. One of the arguments I have seen in favour of this is that, “Jony Ive is not a graphic designer”.
Whilst it is true that Jony Ive isn’t strictly a graphic designer, it is also a silly point to make. As long as Jony has a firm grasp of the principles and concepts of design, he has a right to direct the company’s creative output, regardless of the design discipline.
He hasn’t taken on the role of a graphic designer; he isn’t personally sitting at a computer designing the software elements, he is overseeing and directing the implementation of UI design. His role is to work with the graphic designers, bringing together his product designs with UI design in a way that unifies and completes the product. This is something Apple has always advocated (the union of hardware and software design), and putting Jony in charge of it all is a move that could potentially place Apple in a much stronger position creatively.
I think Apple knows that their iOS interface is becoming increasingly dated, and to compete in this fast-paced market, they have to continue to innovate. The UI is an obvious place to start, with many elements being left largely unchanged since the original release. I’m not sure how radical the next version will be, whether they will make the changes straight away or gradually release them over the next few versions, but I’m almost certain they will look to make some big changes.
There’s only one way we will know what’s happening for sure, and that’s to wait for it to come from Apple themselves (and we won’t have to wait much longer).
Should you create a design that is ‘timeless’? It’s something that has been debated by many people, and whilst there are many good arguments in favour of it, I personally think that attempting to achieve a ‘timeless design’ is misguided and impractical.
In the early 2000s, Apple debuted their Mac OS X operating system, in stark contrast to the comparatively ugly Windows operating system, though if you look at those early releases of OS X today, you’ll see that it isn’t as amazing as it probably seemed back then. Design is not a constant; it is an ever-shifting discipline, a continuous evolution of thought, blending aesthetics with science. No one can accurately predict where design will be in 10 years’ time; we can only guess and see how wrong we were when the time comes. OS X has moved with the times, and it has proven that hindsight can make fools of us all. Remember brushed metal? I still have nightmares about it.
People often point to the works of Dieter Rams as examples of ‘timeless design’. Unfortunately, whilst his works are wonderful pieces of art, demonstrating minimal aesthetics and simplified functionality, they are not timeless. They are not the creations you would expect to buy in a shop today, and whilst they carry a sense of nostalgia and are still well designed from a design theory point of view, they appeal mainly to design connoisseurs, creative geeks and hipsters (probably). They still look great, but they are aged and not consistent with the design trends of today. What Dieter Rams did do, though, was create a set of guidelines and develop a framework which he believed all good designs should follow, and by following those guidelines you can get as close to ‘timeless design’ as possible.
You cannot fight off the effects of time, nor what time does to our perceptions and expectations as consumers or designers. You can create designs that age better than others, but ‘timeless’ isn’t something we should be aiming for anyway. Design is a reflection of the time we are in, and what looks good today may not look good tomorrow, and that’s okay.
The Apple rumour mill has been working extra hard recently in the run-up to WWDC, and the most interesting rumour is arguably that iOS 7 is under heavy development and will feature a flatter UI style.
I try not to pay too much attention to rumours, not necessarily because they’re likely to be inaccurate (though a rumour by definition carries a certain degree of uncertainty and speculation), but because I’d rather work with absolutes. The only time we will know anything for sure is when Apple announces it. Until then, despite speculation and analysis to the contrary, we can’t be sure of what is really going on. However, the fact that Scott Forstall has left (or been kicked out of) Apple is a strong indicator of the future, and potentially marks an important change, one that will resonate through the company and could mark a new era of interface design.
Again, I can’t comment with anything other than idle speculation, but it also doesn’t take a genius to see that things are changing with regards to Apple’s approach to UI design. With Scott Forstall (an avid supporter of skeuomorphism) gone and Jony Ive (a supporter of flat design, an anti-skeuomorphism movement) at the helm, it certainly adds substance to the rumours that iOS 7 will feature a flatter design.
The word is that extra development resources are being redirected to iOS 7 from OS X (10.8), and that would certainly make sense if there is going to be a UI overhaul. Just how far Jony will go is anyone’s guess, but we won’t have to guess much longer.
The death of Google Reader has brought up a few serious considerations about our dependence on suppliers, providers and third-party platforms.
Google Reader has a small but devoted following, and we shouldn’t kid ourselves into thinking that it was axed for any other reason than profit. In Google’s eyes, it simply wasn’t profitable anymore, and they had no interest in spending time and money trying to make more of their existing (though shrinking) user base. I don’t blame them, it’s business, but it raises questions about how much we all depend on platforms and services that are out of our control.
If you have a blog hosted on Tumblr, how do you know it won’t simply go away in a few years? Or if you have a Gmail account, what would happen if Google decided that it was no longer a valuable platform to them? A service provider only has to provide that service for as long as it makes business sense for them to do so, and they can kill their services at any time depending on their circumstances.
I’m not saying that we shouldn’t use third-party systems, I’m just saying we shouldn’t expect them to last forever, and we shouldn’t forget that our content and data are in someone else’s hands. If something is really valuable to you, then put it somewhere safe. Host your own blog, use your own email domain, store your own portfolio, and so on. That way, if something goes wrong with a provider, you still have everything you need, and it reduces the number of third-party systems you have to rely on and worry about.
If you have to interface and communicate with people on a daily basis, then you can consider yourself a brand, regardless of whether you freelance or not. You have an image that you can control and a reputation to maintain.
Your online profile is your identity, and what you say and do is a massive part of it. But like brands in the business world, you have the power to control how you are perceived, you can develop a ‘brand image’ and you are ultimately responsible for what you have to offer the world.
If you are a freelancer, the need for personal branding is obvious, but it isn’t restricted to people who directly make money from operating as a brand; it’s for anyone who cares to create one. You can give yourself a logo, develop a site, manage your identity, run social media and PR on behalf of yourself, and run ads. The benefit is usually financial, or part of building a reputation, or just the satisfaction and fun of having your personal brand as a project or hobby.
Over the past few years, we have seen the emergence of personal branding through YouTube, Twitter and Facebook. These are people who have earned a following and aren’t considered celebrities in the real world, but have somehow managed to develop brand equity online. Some of these people make financial gain from it; others just seem to run it for the sake of their ego.
I know that some people do it for pure vanity, but I think people also do it because doing so distinguishes them from a ‘consumer’ and defines them as a ‘creator’, something which carries the connotation of being more valuable (regardless of whether that is true).
Owning your own brand can sometimes perpetuate the delusion of fame for the individual, but I don’t think everyone is that naive. I think there are people out there who feel they have something to share, whether it’s with 10 people or 1000. Sharing knowledge and controlling your brand isn’t a bad thing when it’s for the right reasons.
One of the big trends in design right now is ‘flat design’, a trend essentially composed of graphic elements stripped of skeuomorphism and overly intricate features, in favour of harmonising block colours, extensive negative space and slightly rounded sans-serif fonts. The problem with flat design is that too many creatives have described it as ‘the future’, and they are wrong.
It certainly has a big part to play in design now, and increasingly so in the future, but it isn’t the sole method of design going forward; it doesn’t hold the monopoly on “good design”, and this is the distinction that some designers are failing to make. Flat design has always existed and it always will, but the exaggerated use of flat design is a trend. It isn’t one that I disagree with, and it isn’t one that will go away, but it is one that can and will co-exist with other design approaches, such as skeuomorphism.
Let’s be honest, ‘flat design’ is a bit more than a trend, it’s a movement, and like all movements – it is reactionary. It was born out of the frustrations and problems that had become rife within design, such as overly detailed and complicated user interface design (iOS being a perfect example), which borrowed too heavily from ‘real world objects’. The argument (and I have made it before) is that complexity distorts things and that many functions within technology do not have a ‘real world’ counterpart: an eBook can be made to look like a book, but what should a web browser look like?
Flat design, however, is the opposite extreme. It is not a solution, but a reaction. This doesn’t devalue what flat design does, and it has a very important part to play in design over the next few years, but it will not kill skeuomorphism. If anything, it may inspire a more neutral and well-balanced design trend, where gradients aren’t shunned, but where intricate skeuomorphic elements are.
Microsoft have implemented flat design in their new Windows operating system, and they have done it well, but whilst things may look prettier in marketing materials and product photos, in practice it dilutes the user experience. Without shadows, shading and gradients, things look flatter (I know, that’s kind of the point), but without depth and perspective, everything bleeds into everything else.
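To make that concrete, here’s a minimal CSS sketch (the class names and colours are mine, purely for illustration) of the difference between a purely flat element and one given subtle depth cues:

```css
/* Purely flat: relies on colour alone to separate the element
   from its surroundings, so adjacent blocks can bleed together. */
.card-flat {
    background: #3498db;
    border-radius: 3px;
}

/* The same element with minimal depth cues: a subtle gradient
   and a one-pixel shadow lift it off the page without going
   anywhere near heavy skeuomorphism. */
.card-depth {
    background: linear-gradient(to bottom, #3b9fe0, #2f89c5);
    border-radius: 3px;
    box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
}
```

A one-pixel shadow is hardly stitched leather, but it’s enough to stop elements bleeding into one another.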
I like flat design, I have used it and will use it again, but it’s not the be-all and end-all.
Google have announced that they are culling Google Reader in just a few months’ time, and whilst it is an ever-declining niche product, it still has a very loyal following of hardcore fans, who up until now depended on Google Reader as the backbone for their RSS apps. I am one of these people.
Firstly, let me make a sweeping statement: Google did this because no-one used Google Reader itself, or at least not enough users did. People used Google Reader as a syncing solution to power their own preferred apps (mine being Reeder), which means that Google was never going to profit from Reader, because no-one used the web interface; they only exploited the syncing functionality to power paid third-party apps.
I think axing Reader was a poor and short-sighted decision; the reason no-one used it was that it was a clunky, horrible and poorly designed interface, and a huge pain to use. Google could have easily updated or improved it and released apps to work in conjunction with the web interface, but they didn’t. Perhaps it’s because they realised people don’t want ads in their RSS reading experience (and Google is practically fuelled by advertising)?
Whatever the reason, it will be gone, but the tech world’s reaction was just as short-sighted and unhelpful. First, people were arguing and forming petitions to stop Google from killing it off, to which I say, ‘let them kill it’. It was a poor product anyway. Then fellow geeks decided it wasn’t so bad and that they should use an alternative, bandying about suggestions and compiling lists of alternatives. This was also ridiculous, since all of the existing alternatives are either not that great anyway, or are paid solutions. I paid for my RSS reader apps and they should come with a syncing solution. People generally don’t want to pay twice, and a paid solution is not a replacement for what was once free.
Whatever the new solution is, whether it is a unified product or not, it should be something new and something better. It should be free or very cheap and it should be well designed and usable, both independently and with third party apps. It’s very simple, and nothing does this effectively yet. Come on, let’s not settle for something mediocre here. With Google gone, there is a real opportunity for innovation once again.
Design is a highly competitive and overly saturated industry, and it’s full of people who are ‘bad designers’, whether it’s the boss’s cousin working in an old copy of FrontPage, or the 16-year-old neighbour who has just learned HTML. It’s not that design itself is a difficult art (I would agree that anyone can do it); it’s just that designing something well is.
Design is about creating something beautiful and functional, and it’s about standing out. To do that you have to think and develop ideas progressively. There is not a single person in the world who is a perfect designer; to attempt to achieve perfection is to set the bar impossibly high, and there will always be someone who wants your design to look a little different.
So, how do you define something as being ‘well designed’ if a lot of it is subjective and open to interpretation and opinion? Although there are tens, perhaps hundreds of variable factors that determine the effectiveness of design, the common issues presented in design can be solved by following three basic and logical rules:
- Does it look aesthetically pleasing? It should be attractive and a pleasure to look at, a work of art in its own right.
- Is it clear, organised and well spaced? There is such a thing as ‘too much’ design, and it doesn’t matter how attractive something is if you can’t make sense of it all.
- Is it functional? It has to work, and it has to do its primary function well. In the case of UX design, do you want to use it, and does it feel good to use?
Obviously, following those three rules is not a guarantee of producing good design; a lot of it is about intuition based on experience, and so the only way to become a better designer is to continue designing, gauge people’s responses, collect data, continue to learn and test everything you do.
The people who are “bad designers” are the people who are either running before they can walk, or people who are content with what they do and blag their way through life. If you are not the latter, then it’s best to get some personal projects done, create your own blog and sites, experiment with ideas and branch out at a pace that you feel comfortable with.
End Note: When I say “bad designer”, I don’t necessarily mean someone is bad at what they do. If someone is serious about design, then they are not “bad”; they may just be at a different point in their career. There are people far better than myself at design, and it’s important to be confident in what you do but also to recognise your own weaknesses and areas for improvement. Always strive to better yourself and others, and over time you’ll get there. It’s also important to help others who aren’t quite there yet.
Over the past week or so, questions have been raised over WebKit and whether its continued popularity will lead to a web-rendering ‘monoculture’. The concerns have only been spurred by Opera abandoning its Presto rendering engine in favour of WebKit, and the fact that Mozilla had a good moan about it.
Firstly, Opera is considered the smallest (in terms of market share) of the big five web browsers, and with Opera moving to WebKit, three of the top five browsers will be using it, leaving only Firefox (which uses Gecko) and Internet Explorer (which uses Trident) on other engines. The impact from a consumer point of view is going to be minimal, arguably unnoticeable. I would say that the average user wouldn’t know the difference and probably doesn’t even understand or care what a layout engine is, nor will they ever need to. Basically, this will only be noticed by the more technical web users, and more specifically the more technical Opera users.
The transition to WebKit isn’t a bad thing (contrary to Mozilla’s belief). WebKit is an open-source rendering engine with many contributors (such as Apple, Google and Nokia), and the collective aim of those contributors is to create a standards-compliant and unified rendering engine. It isn’t the product of a sole corporate entity, and the ultimate goal is therefore to create something usable, free from centralised control and profit. That’s not to say that the applications that make use of WebKit are not profitable, but they are a completely different thing.
There are problems with the state of web rendering at the moment, but these can be solved through standards compliance; the issues faced (such as developers using engine-specific CSS code) are exactly what the W3C is trying to resolve.
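To illustrate what ‘engine-specific CSS’ looks like in practice (the selector and values below are hypothetical, purely for the sake of example), a single effect often has to be declared once per engine, with the unprefixed property being the standardised form the W3C is pushing everyone towards:

```css
/* One visual effect, declared once per rendering engine.
   Each vendor prefix targets a single engine; the unprefixed
   declarations are the W3C standard forms. */
.button {
    -webkit-border-radius: 4px; /* WebKit (Safari, Chrome) */
    -moz-border-radius: 4px;    /* Gecko (Firefox) */
    border-radius: 4px;         /* standard */

    background: -webkit-linear-gradient(top, #fff, #ddd); /* WebKit */
    background: -moz-linear-gradient(top, #fff, #ddd);    /* Gecko */
    background: -o-linear-gradient(top, #fff, #ddd);      /* Presto (Opera) */
    background: linear-gradient(to bottom, #fff, #ddd);   /* standard */
}
```

The more engines converge on the standard, unprefixed behaviour, the fewer of these duplicated lines developers have to write and maintain.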
Opera moving to WebKit hasn’t created a monoculture; Opera is not a big enough player to do that. There are still three rendering engines in competition with each other, so the argument about stalling innovation is redundant for the time being.