role="button"
to these elements so the semantics match the visuals.
I'm not going to get into that, but what I am going to tackle is the approach taken when that role is added.
Normally when we add a role="button"
to a link, we also add some simple JavaScript handling for the spacebar so users can activate the control in the same way they would a native button. Without this, a user hitting the spacebar on a link which looks like a button would end up scrolling the page.
So we end up with something like this:
[].slice.call(document.querySelectorAll('a[role="button"]')).forEach(function (el) {
  el.addEventListener('keypress', function (e) {
    if (e.keyCode === 32) {
      e.preventDefault();
      el.click();
    }
  });
});
But that misses some of the nuances of the native control, a common pitfall when trying to replicate a native element. Native HTML buttons also allow a user to cancel a spacebar trigger (just as you might cancel a mouse-click by pulling your mouse away before releasing it) by hitting the Tab key whilst the spacebar is still pressed.
What we actually need to do is listen for two events.
First we need to listen for the keypress
event on the spacebar as above, as this is what allows us to cancel that native scroll which would happen otherwise.
Then we need to listen for the keyup
event on the spacebar. This means that if a user decides to cancel a “click” on our link-styled-as-a-button by using the tab key, we don't trigger it by mistake.
This ends up looking like this:
[].slice.call(document.querySelectorAll('a[role="button"]')).forEach(function (el) {
  el.addEventListener('keypress', function (e) {
    if (e.keyCode === 32) {
      e.preventDefault();
    }
  });
  el.addEventListener('keyup', function (e) {
    if (e.keyCode === 32) {
      e.preventDefault();
      el.click();
    }
  });
});
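To see why the second handler resists the tab-interrupt, here is a small framework-free model. This is my own illustration, not code from the tests: it simulates the fact that the click only fires from the spacebar keyup, and that keyup is only received by the element which still has focus.

```javascript
// Hypothetical model (not from the article's test suite) of the double
// handler: a "click" only happens if the spacebar keyup lands on the
// element that was focused when the key went down.
function simulateSpacebarPress(events) {
  let focused = 'link';   // focus starts on our link-styled-as-a-button
  let spaceDownOn = null; // element focused when the spacebar went down
  let clicked = false;
  for (const e of events) {
    if (e === 'space-down') spaceDownOn = focused;
    if (e === 'tab') focused = 'next-element'; // Tab moves focus away
    // the keyup handler only runs on the currently focused element
    if (e === 'space-up' && focused === 'link' && spaceDownOn === 'link') {
      clicked = true;
    }
  }
  return clicked;
}
```

Pressing and releasing the spacebar on the link activates it, but pressing Tab before releasing moves focus away, so the keyup (and therefore the click) never reaches the link.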
See below for the actual tests and detailed breakdown.
So we expected the first set of tests to be equal across the board and that is what we saw. Both handlers displayed the same characteristics as the native button. This proves we have not broken anything with the new handler.
However what we were really testing was if the native tab key interrupt was carried through with the handlers.
It's pretty clear that the single event handler doesn't match the native button behaviour for keyboard-only users, whilst the double handler does a much better job of this.
It's also interesting to see that JAWS and NVDA don't allow for this interrupt but Voiceover does.
In conclusion it seems that a double handler like this provides a better solution for providing the correct behaviour for links with a button role.
This is to test the standard practice of adding a single event listener (for the spacebar) against a proposed new double handler which prevents the standard scroll but also allows for the tab-interrupt.
Move to the control with the Tab key and then activate the control with the spacebar.
This is a bit of a control test to make sure that the double handler works in the same way as the single handler and they both match the native expected outcome.
As expected both the single and double handlers perform the same with the standard test of triggering the control.
Browser / screenreader | Native button behaviour | Does single handler match native? | Does double handler match native? |
---|---|---|---|
Win Firefox | increments counter | yes | yes |
Win Chrome | increments counter | yes | yes |
Win Edge | increments counter | yes | yes |
MacOS Firefox | increments counter | yes | yes |
MacOS Chrome | increments counter | yes | yes |
MacOS Edge | increments counter | yes | yes |
MacOS Safari | increments counter | yes | yes |
NVDA Win Firefox | increments counter | yes | yes |
NVDA Win Chrome | increments counter | yes | yes |
JAWS Win Chrome | increments counter | yes | yes |
JAWS Win Edge | increments counter | yes | yes |
Voiceover MacOS Safari | increments counter | yes | yes |
The second test is to move to the link again, press the spacebar and, instead of releasing it, hold it down whilst pressing the Tab key.
This is the main test. We know native buttons have the option of cancelling a spacebar activation by using the tab key. This test checks how our different handlers cope with this and how different browsers and screen-readers handle it.
Browser / screenreader | Native button behaviour | Does single handler match native? | Does double handler match native? |
---|---|---|---|
Win Firefox | stops button triggering | no | yes |
Win Chrome | stops button triggering | no | yes |
Win Edge | stops button triggering | no | yes |
MacOS Firefox | stops button triggering | no | yes |
MacOS Chrome | stops button triggering | no | yes |
MacOS Edge | stops button triggering | no | yes |
MacOS Safari | stops button triggering | no | yes |
NVDA Win Firefox | increments counter | yes | yes |
NVDA Win Chrome | increments counter | yes | yes |
JAWS Win Chrome | increments counter | yes | yes |
JAWS Win Edge | increments counter | yes | yes |
Voiceover MacOS Safari | stops button triggering | no | yes |
Successful activations will increment the counter on the control.
The tab stop links between examples are just there to separate the tests.
Test 1 expected: button is activated.
Test 2 expected: tab key prevents spacebar from actioning the button.
Tab stop (ignore, just for testing)
Test 1 expected: link is not activated, page is scrolled.
Test 2 expected: link is not activated, page scrolls and focus moves to next element.
Tab stop (ignore, just for testing)
Test 1 expected: page is not scrolled, link is activated.
Test 2 expected: link is activated (multiple times) and focus moves to next element.
Tab stop (ignore, just for testing)
Test 1 expected: page is not scrolled, link is activated.
Test 2 expected: link is not activated, focus moves to next element.
Feedback from the draft stages has been taken into account and a new candidate recommendation was released in January 2023. This is now being reviewed by the accessibility community for testability, ease of understanding and impact on users. It is likely that the final recommendation will be released in the third quarter of 2023 (updated 25 April 2023).
WCAG itself is not a law but a series of guidelines. However it is referenced by other laws and is likely to be cited in legal cases.
WCAG is currently referenced in the Public Sector Bodies Accessibility Regulations (and the European equivalent) as the minimum standard to be reached. That 2018 regulation has already been amended (in 2022) to prepare for the new release of the guidelines. The new statement now reads (emphasis mine):
A website or mobile application of a public sector body will be presumed to be in conformity with the accessibility requirement to the extent that the website or mobile application conforms to Level A and AA Success Criteria as set out in the Web Content Accessibility Guidelines recommended by the World Wide Web Consortium, as amended from time to time
As such, as soon as the new criteria are published they become enforceable.
Once the final recommendations are published there is likely to be a 12 month grace period for UK public sector sites to ensure they are compliant. The GOVUK Design System team will need to update their components and release them - they have a goal to do this within 6 months of release. This will give public sector teams an additional 6 months to apply those changes.
UK government sites will begin to be monitored for compliance in 2024.
However it would be beneficial for site owners to begin to review their designs now to see where the new criteria might affect them and begin to plan accordingly.
Most of these changes are likely to affect more commercial sites so if you are working on non-governmental clients there is likely to be more work required. However even if you only use government department components there is still work you can do to prepare.
Two things have changed - Parsing has been removed (the first time this has happened) and the status of Focus Visible has been upgraded to level A instead of level AA.
There has been a lot of debate around this, but at the moment it looks like it will be removed because any failures it currently catches are also covered by other criteria. This criterion often causes confusion around what is or is not classed as a failure versus just bad code. Much improved parity between browsers in how they handle poor HTML means its impact can now be anticipated, whereas before the outcome often differed between browsers.
For example, a duplicate ID might be an issue only if it is being used to reference an element for accessibility purposes. Removing this criterion removes the confusion and lets us concentrate on the actual impact on the user.
This doesn’t mean you shouldn’t keep checking your HTML validates - it is still one of the simplest ways to check for a bunch of knock-on effects from accessibility to rendering and styling.
As WCAG is meant to be backwards-compatible and this is the first time a criterion has been removed, it's unclear what effect, if any, this will have on things like the ISO standard (which still uses WCAG 2.0 as a benchmark).
This criterion was rated AA under WCAG 2.1 but has been moved to A under 2.2
This is the most wide-ranging change as it is likely to impact most websites and even challenges some of the more basic styling techniques used for focus indication.
There is already a criterion called Focus Visible, but the requirements for it are quite minimal:
Any keyboard operable user interface has a mode of operation where the keyboard focus indicator is visible.
There weren’t any minimum requirements for what that indicator should look like. Is a thin border OK? Visible to whom: someone with good eyesight?
This new criterion looks to pin that indicator down with a series of rules around what it should look like. It boils down to these rules:
There are however details of this criterion which are causing confusion in the community around how these values are calculated for specific use-cases. Also, how would this impact the standard application of a text underline on focus? That underline is generally just 1px, which is below the threshold for passing this criterion and puts a lot of sites at risk of failing straight away.
So at the moment this criterion is marked “At risk” but is unlikely to be changed at this stage so will either be implemented as it is or get pulled before publication. If it gets implemented expect a bit of confusion around how this should be tested and if something passes it or not.
Even if this criterion does not make it into the final release, it is worthwhile revisiting your product’s focus indicators to see if they really are as visible as they could be.
Alistair Campbell, part of the WCAG editorial team, has a good breakdown of the latest version.
Update: 22 March 2023 this looks to have been dropped from AA level to AAA level. This is a disappointing move as it leaves us without a clear definition of what visible actually means at the commonly used compliance levels of A and AA, despite Focus Visible still being a AA criterion.
Understanding Success Criterion 2.4.11: Focus Appearance
Still looking at improving how focus is handled in WCAG, this criterion ensures that, after you have created a visible indicator, you don't then have something obscuring it entirely when the element is focussed, such as a non-modal dialog, sticky navigation or a cookie banner.
Think of when you tab down a page and the navigation stays fixed to the top of the browser window - we want to ensure the item with focus on the page is not covered by that navigation.
Understanding Success Criterion 2.4.12: Focus Not Obscured (Minimum)
This is the partner to the above criterion. Being the AAA version this is more strict and requires that no part of the focus indicator is obscured.
Understanding Success Criterion 2.4.13: Focus Not Obscured (Enhanced)
This criterion is designed to help users who have difficulty with precision actions or who have to use non-standard input methods (such as eye-tracking). If an interface component requires a user to use a dragging motion to complete an action, then there must be a way of completing the action without dragging using a single pointer (ie not multi-touch gestures).
An example might be dragging a 3D visualisation of something to see it from all angles. Having arrow buttons would allow a user to be able to accomplish the same.
Another example is being able to drag items in a list to sort them, but also being able to click on an item in the list and have arrows appear to allow the item to be moved up and down.
A partner to this criterion is the existing 2.1.1 Keyboard but it was found there are instances where keyboard equivalence does not always equate to single pointer equivalence so a separate criterion was needed.
For example, think of a horizontally scrolling component. It can be swiped (multi-touch), and a keyboard can access it (because of focusable items inside it), but users who use a head-pointer will not be able to scroll it. This criterion catches those cases, which would otherwise fall outside WCAG.
This new criterion also helps cover off touch experiences more comprehensively - but remember people on mobiles might want to use an external keyboard and those on desktops might zoom in and trigger a mobile view. It’s often best to consider viewport and input modalities as unrelated.
Understanding Success Criterion 2.5.7: Dragging Movements
Target size has been part of design best practice for a long time and is covered currently by WCAG 2.5.5 (AAA) which requires a target of at least 44px by 44px (which is what Apple and Google recommend for their mobile interfaces). This new criterion adds a new minimum requirement at the AA level of 24px by 24px. Because of the AA rating this now comes under many of the legal requirements of various countries.
The same exemptions apply to this criterion as apply to the AAA one:
But there is an additional one for this new criterion around spacing:
Teams should still look to meet the AAA standard where possible (especially where the platform states it in their documentation as Apple and Google do), but this criterion does at least make everyone look to ensure they are doing the minimum.
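The two thresholds above can be sketched as a simple check. This is my own hypothetical helper, not an official test: it only compares dimensions in CSS pixels and does not model the spacing or inline exceptions mentioned above.

```javascript
// Hypothetical target-size checker (my own sketch): 24x24 CSS pixels
// for the new WCAG 2.2 AA minimum, 44x44 for the existing AAA criterion.
// Exceptions (spacing, inline targets, etc.) are not modelled here.
function targetSizeLevel(widthPx, heightPx) {
  if (widthPx >= 44 && heightPx >= 44) return 'AAA';
  if (widthPx >= 24 && heightPx >= 24) return 'AA';
  return 'fail';
}
```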
Understanding Success Criterion 2.5.8: Target Size (Minimum)
In order to make interfaces more consistent and help users find help when they need it (often a stressful situation), where a site displays certain help features, these should be present in the same place in relation to other content, on each page.
The features this applies to are:
Chat interfaces are especially useful as they allow the user to retain the current context whilst interacting with the source of help and allow the user to use their own vocabulary rather than the site’s.
This doesn’t mean the site has to provide these options - just that, if they are provided, they are presented consistently.
As the user moves through the site those contact features should be in the same relative visual location so the user knows where to look. If the user changes the screen size then all pages should act in the same way in where the contact options end up.
Understanding Success Criterion 3.2.6: Consistent Help
This criterion is designed to help users log into sites as smoothly as possible. Having to solve a puzzle, transcribe information (unless you can copy & paste) or remember usernames and passwords can be a real struggle for many users - these are classed as a cognitive function test.
Username and password fields which are marked up correctly (and don’t prevent copy & paste) allow the browser or password managers to fill in the fields on behalf of the user, which passes this criterion. Note that personal information such as a name, email address or phone number is not classed as a cognitive function test (though not allowing those fields to be auto-filled would fail WCAG 1.3.5 Identify Purpose).
Old-style bank authentication of the style “enter the 4th, 6th and 8th characters from your password” would also fail this criterion as it requires transcription and does not support copy & paste or autocomplete.
reCaptchas are exempted because the objects displayed are classed as “familiar” objects (think traffic lights), despite there being obvious issues with many of the source images and names being pulled from the US. Note that those old-style Captchas which had a deformed word you had to recognise would fail this, because a deformed word is not a familiar object and requires transcribing.
This criterion also applies to 2-factor authentication (it’s no good having an accessible first step if you then have to solve a puzzle on your phone) and username and password recovery (an essential part of authentication).
Understanding Success Criterion 3.3.8: Accessible Authentication
This is a stricter version of the above. It removes the object exemption (and so the option for reCaptcha).
Understanding Success Criterion 3.3.9: Accessible Authentication (No Exception)
On the face of it this is a simple criterion: don’t ask the user for answers you already have, or at least tell them what they entered before so they can select it as an option. For example “you said your email address was x, would you like to use that?” rather than “enter your email address” again.
What we are trying to avoid is the user having to re-enter data they have already provided, and either getting a mismatch or spending additional effort entering or recalling what they previously entered.
Note - browser autocomplete is not a sufficient mechanism to pass this criterion.
Information previously entered by or provided to the user that is required to be entered again in the same process is either auto-populated, or available for the user to select
However the definition of a process says it can also run across different domains. This paragraph from the guidelines could have an impact:
so if a check-out process includes a 3rd party payment provider, that would be in scope
So if the payment provider were to ask for an address which the user had already entered as part of their checkout process, this must be passed through to the provider page.
An accessible name is the name of an element as exposed to assistive technology (not to be confused with the name
attribute on form fields). For example, it is what is read out by a screen‐reader when the user accesses the element, and it is what a speech recognition user would need to say to access that element on the page.
An example is the accessible name of a link. This can be, at its simplest, the content enclosed by the link tags. The accessible name of the link below is "Bob":
<a href="">Bob</a>
You can see the accessible names of elements in most browser developer tools by inspecting the element and checking the accessibility panel.
But an accessible name can also be created from other things, like the relationship generated by a for
attribute. The following input's accessible name is generated by the label. The accessible name for the input is "Your name" and this is what will be announced to screen‐reader users when they land on it.
<label for="name">Your name</label>
<input id="name" type="text" />
So we know accessible names can be assigned using a few different methods. Because of this we need some way of deciding which one wins out, especially if they conflict.
How an accessible name is computed is subject to a hierarchy of checks against the existence of various attributes, each one potentially overwriting the others (you can even see this order represented in the devTools display).
The order, in descending priority (so aria attributes win over everything else), is:

1. aria-labelledby
2. aria-label
3. native HTML labelling (a label element, alt attribute, or the element's contents)
4. title
5. placeholder

(I included placeholder and title in there for completeness, but please don't use these for anything where you need users to actually read the attributes' content, as both placeholder and title attributes have usability and accessibility issues. I'd go as far as saying just never use them.)
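This precedence can be sketched as a simplified function. This is my own illustration only; the real accessible name computation (the AccName specification) is far more involved, resolving ID references, visibility and nested content.

```javascript
// Simplified, hypothetical sketch of accessible-name precedence.
// Each property that is present wins over everything below it.
// el is a plain object standing in for a DOM element's attributes.
function accessibleName(el) {
  if (el.ariaLabelledby) return el.ariaLabelledby; // resolved text of the referenced IDs
  if (el.ariaLabel) return el.ariaLabel;
  if (el.label) return el.label;        // e.g. an associated <label> via for/id
  if (el.contents) return el.contents;  // text enclosed by the element
  if (el.title) return el.title;
  if (el.placeholder) return el.placeholder;
  return '';                            // no accessible name can be computed
}
```

So `accessibleName({ ariaLabel: 'Tom', contents: 'Bob' })` gives "Tom", matching the examples that follow.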
So if you take the link from our first example and add an aria-label
attribute:
<a href="" aria-label="Tom">Bob</a>
that will overwrite the contents and the accessible name is now “Tom”. Note that the visible name will still be “Bob” as we are only changing the accessible name.
Similarly if you take that link and add an aria-labelledby
attribute it will trump all of the others:
<a href="" aria-label="Tom" aria-labelledby="name">Bob</a>
<div id="name">Kim</div>
The link’s accessible name is now “Kim” despite the other changes still being present.
Where none of these ways of assigning an accessible name are present, the browser cannot calculate an accessible name and nothing is returned.
This is most commonly found where form inputs are missing the connection with their label, though the place I see it most often is on buttons.
For example, if the button doesn't have any content because a background image, an icon font, or an image with no alt text is being used to display a visual-only message. You might have seen this yourself in a mobile menu (“hamburger”) icon, or a carousel control like the one below:
<button><img src="rightArrow.png" /></button>
To a screen‐reader user this will just be announced as "Button" so they will have no idea what it does.
For a speech-recognition user they could try a few words based on the visual appearance but will likely give up and resort to other means.
If you happen to have used an embedded SVG for that image, you might have inadvertently provided an accessible name, and it could even have the opposite effect to the one you might have wanted, like the example below:
On Autotrader’s homepage they have a carousel which uses an SVG icon inside the directional buttons.
<button type="button" ...>
<svg xmlns="http://www.w3.org/2000/svg" ...>
<title>chevronLeft</title>
...
</svg>
</button>
Unfortunately on this page an accessible name was not provided for the button, but because the SVG had a title element, that is what was exposed as the accessible name (it counts as contents as far as accessibility is concerned). The “chevronLeft” title from the SVG, whilst meaningful to the designer who exported it, is of no use to the end-user, not least because it suggests the arrow points in the opposite direction on the right-facing button, where the icon has been rotated using CSS.
For those wondering: yes, both buttons use the same SVG, with the image simply rotated for the opposite direction, so both the left and right buttons on the carousel had “chevronLeft” as their accessible name.
What they should have done in this case was hide the SVG from assistive technology using aria-hidden and provided a suitable aria-label on the button itself.
<button aria-label="Next" type="button" ...>
<svg aria-hidden="true" xmlns="http://www.w3.org/2000/svg" ...>
<title>chevronLeft</title>
...
</svg>
</button>
So accessible names are very important for users to understand what the various controls on the page do and having the wrong ones can even cause the user to do something they didn't intend.
This does mean that you can do things like override the visible contents of the element with a different accessible name, which can lead to issues.
The following example is the code for the button from the BBC screenshot above (since I took this screenshot it has been fixed, although it took a redesign for this to happen).
<button aria-label="Next">Continue</button>
The visible content is "Continue", but the accessible name is "Next" as the aria-label
has overridden the content‐derived accessible name.
For a screen‐reader user who can see (many screen‐reader users are not fully blind), this can be a bit confusing as the visible and audible messaging is different.
For a speech‐recognition user it is frustrating as they will try to say "Click 'continue'" which will not work because the software has been told the accessible name for this button is "Next". In this situation the speech-recognition user will (probably after a few tries) resort to asking the software to number all the controls on the page to allow them to select the correct one and continue.
I've seen worse examples, where the meaning of the visual label was completely different to the accessible name - for example using an aria-label
of “cancel” on a button with the visual text of “continue” - and would have caused serious issues in a user's journey.
WCAG even has a criterion covering this, called “Label in Name”, to ensure the visible text is included in the accessible name.
But occasionally it can be useful to have a different accessible name from the visible one.
If you have a number of links which have similar functionality, but where adding unique visible text to each item might make the interface cluttered or confusing for other users, you can use certain techniques to provide different values to different groups of users.
For example, a page where you have lots of article summaries, each with a “Read more” link. The issue here is that a screen‐reader user would not be able to distinguish the purpose of the links when viewing the links in isolation (this is one way screen‐reader users navigate a page - for more on this take a look at my guide on the JAWS screen-reader).
<h3>Dynamites awards in pictures</h3>
<p>Teaser content ...</p>
<a href="">Read more</a>
They will see a list of "Read more" and will have to go into the content to find out which one they need to follow to get to the article they want:
Read more
Read more
Read more
But making the links more visually contextual like “Read more Dynamites awards in pictures” would make the interface overloaded and repetitive, helping one group to the detriment of another.
We need a way to satisfy both and one way to do this is with an aria-label
to modify the accessible name, like so:
<h3>Dynamites awards in pictures</h3>
<p>Teaser content ...</p>
<a href="" aria-label="Read more Dynamites awards in pictures">Read more</a>
This keeps the visual the same but makes the links make sense to screen‐reader users as they will see a list of links like this:
Read more Dynamites awards in pictures
Read more Voices of our veterans on Remembrance Day
Read more Robin helps OC deliver on hybrid working
It also still works for speech recognition users. As we have kept the visible label in the accessible name, the user can still say "click 'read more'" and the software will place a number next to each link to allow them to pick the one they want.
You can even reuse existing content to build an accessible name by using aria-labelledby
. This takes a list of IDs of elements and constructs an accessible name based on the order of the IDs.
This allows you to re‐use content (with all the advantages and pitfalls that gives). In the example below I'm creating the same accessible name as we have just done, but using the h3
and link content via aria-labelledby
to concatenate them into one phrase.
<h3 id="article1-title">Dynamites awards in pictures</h3>
<p>Teaser content ...</p>
<a href="" id="article1-more" aria-labelledby="article1-more article1-title">Read more</a>
Bear in mind that any use of aria requires more testing with screen‐reader and voice-recognition software to ensure it works as expected. If you can get away without using aria, that is often the better option.
Actually what I'd recommend is using some extra HTML and CSS to achieve the same result, like so:
<a href="">Edit<span class="visually-hidden"> thing one</span></a>
This uses a CSS class which hides the content visually, but crucially not from assistive technology, and achieves the same effect as the aria-label
whilst being a more robust solution.
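The article doesn't define the visually-hidden class itself, but a common implementation of this pattern (my assumption, not the author's exact rules) keeps the content in the accessibility tree while removing it from the visual layout:

```css
/* A widely used visually-hidden pattern: the element takes up no visible
   space but its text is still exposed to assistive technology. */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```

Crucially this is different from display: none or visibility: hidden, both of which remove the content from assistive technology as well.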
The critical thing when editing an accessible name to add context for speech recognition users is that content is only added after the visible label. Speech recognition software varies in its ability to search for content, and most don't respond well when the visible content is not the first content in the accessible name.
For example if you have a link like this:
<a href="">change name</a>
and you want to add some context, don't do this:
<a href="" aria-label="Bob change name">change name</a>
or this
<a href="" aria-label="change Bob's name">change name</a>
as most speech recognition software won't make the match when a user says “Click ‘change name’”.
Instead add the new content to the end of the visible label:
<a href="" aria-label="change name for Bob">change name</a>
Accessible names are how you communicate the meaning of the interface to your users and getting it right is really important.
If you have any doubt as to what the accessible name is for an item, most browser developer tools now have an accessibility panel which shows you the computed value. However testing with screen‐readers and speech recognition is still necessary to ensure the correct meaning is conveyed.
If the user can't make out the text because it falls below the required contrast ratios, it is effectively not there for that user.
Making colours accessible can sometimes be seen as an unwanted constraint on creativity, but there are so many colours available that treating accessibility as a design philosophy both reduces the possibilities to a more manageable number and gives a reason for using a particular shade over another.
For example, Slack used to have a palette of 132 colours until they did an accessibility review which enabled them to reduce it down to a much more manageable 18 colours.
One of the most talked about is colour contrast. At its most basic this is making sure that a colour is clearly distinguishable from the surrounding colour so the item is clearly perceivable to the user. For example, making sure a font colour has a good contrast against the page background (anyone else remember that phase of web design where everything was small grey text on a white background?).
Contrast is important not only for users with reduced visual acuity, but good contrast helps when screen brightness is reduced (for example with battery saver options) or when viewing a screen in bright sun.
Colour contrast can be measured as a ratio, and this is what you will often see cited in accessibility reports or by contrast checkers. Generally you want at least 4.5:1 for text content and 3:1 for non-text content to meet WCAG AA, but bear in mind these are minimums and you should aim to far exceed them (the higher the first number, the clearer the content will be to the user). The WCAG AAA ratio for text contrast is 7:1, so aim for that instead.
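For the curious, those ratios come from a published formula: each colour is converted to a relative luminance and the two luminances are compared. The sketch below is my own implementation of the WCAG 2.x definition, for hex colours only:

```javascript
// Contrast ratio per the WCAG 2.x relative-luminance formula
// (my own implementation, hex colours only).
function channel(c) {
  // c is 0-255; convert to linear light as WCAG defines it
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  const r = channel((n >> 16) & 255);
  const g = channel((n >> 8) & 255);
  const b = channel(n & 255);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(a, b) {
  // lighter luminance on top, plus a 0.05 flare term on both sides
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black on white gives the maximum possible ratio of 21:1, which is why 4.5:1 and 7:1 are very achievable targets.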
Remember these ratios are also important for the various states of the items, so consider colour contrast when looking at focus and hover states too. You don't want a link which appears to disappear when the user tabs to it, and you also want the various states to be obvious when triggered.
Whilst contrast is most commonly linked to text, it is also crucial for interface elements. Well-defined borders on things like form inputs ensure users can perceive the element itself, which is important when there is no other indication (like the text on a button).
Also pay attention to mixed-colour backgrounds, such as when text is overlaid on images. Even if the component is designed with a subdued image to give the text good contrast, be aware that as the layout shifts with the viewport, the position of the image in relation to the text may also move. Where the text once overlaid a contrasting part of the image, that may no longer be the case and the contrast may have suffered.
Colour as information

Colour is also something which shouldn't be used in isolation to convey information. For example, saying "credits are shown in green and debits in red", or using just a red outline to mark an error on a form input, is not helpful. As not everyone perceives colour in the same way (various forms of colour-blindness are very common), it is important that text labels are also used to present the meaning, and that it is the meaning which is referred to rather than the colour.
Similarly, just identifying links with a colour can leave users struggling to identify them, so consider keeping the underlines unless it is in a section of the page where it is obvious it is a link (such as a navigation area).
As such, colour should always be a secondary consideration. The simplest way to check something still makes sense is to view it in greyscale which you can do using Chrome's rendering options in devTools, or in MacOS's colour filters (under Accessibility in System Prefs).
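If you want the same check inside the page itself, a temporary, dev-only CSS rule (my own suggestion, not something to ship) achieves the same thing as the browser and OS filters:

```css
/* Temporary dev-only rule: render the whole page in greyscale */
html {
  filter: grayscale(100%);
}
```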
Speaking of colour-blindness, there is a misconception that red/green colour-blindness just means you can't 'see' the difference between red and green. In fact it causes issues with any colour which uses red or green as a constituent part, for example purple. So the likelihood is many colours will actually look different to a user with this condition.
Also bear in mind that colour-blindness is not binary but has a spectrum of severity in the different types. There are emulators such as in the Chrome developer tools Rendering panel, but these show examples only (I've not found two emulators which agree on how something would be perceived). What they do show however is how different types of colour-blindness can also affect how effective colour contrast is, another reason why you want to make your contrasts as high as possible.
Bear in mind things other than the very common colour-blindness can affect colour perception. Most operating systems now include bedtime routine modes which change the colours emitted after a certain time to reduce blue-light exposure.
Colour is sometimes used as a background to help define an area. Even where this background has good contrast against the surrounding area, background colours are not rendered when using an inverted colour setting like Windows High Contrast, so this important visual grouping can be lost.
To avoid this, use a transparent border on the element which will be highlighted in the inverted colour scheme to define the area in an alternate but just as effective way.
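As a sketch, the idea looks like this (the class name is my own) - the border is invisible against the normal background colour, but forced-colour modes draw borders in a visible colour even as they strip the background:

```css
.panel {
  background-color: #eef2f7;         /* lost in forced-colour modes */
  border: 1px solid transparent;     /* invisible normally, drawn in High Contrast */
}
```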
As mentioned, some users have problems using sites which have acres of bright white backgrounds, so look at the possibility of implementing a dark mode as an option. Similarly, some users may struggle with a dark scheme, so allow them to choose a lighter option. These can be tied to the operating system's user preference setting, but should always include a toggle to allow the user to override this for the individual site if desired.
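Hooking into the operating system preference is done with the prefers-color-scheme media query. A minimal sketch (the custom property names and data attribute are my own; the toggle itself needs a little script on top to set the attribute):

```css
/* Default (light) scheme */
:root {
  --background: #fdfdfd;
  --text: #222;
}

/* Follow the operating system preference... */
@media (prefers-color-scheme: dark) {
  :root {
    --background: #1c1c1e;
    --text: #e4e4e4;
  }
}

/* ...but let a site-level toggle override it */
:root[data-theme="light"] { --background: #fdfdfd; --text: #222; }
:root[data-theme="dark"]  { --background: #1c1c1e; --text: #e4e4e4; }

body {
  background-color: var(--background);
  color: var(--text);
}
```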
Pure black on white can be difficult for some users with dyslexia, so look at using off-white and dark grey instead.
Finally, colour can also draw attention and distract or simply be overwhelming, so be aware of how this might affect users.
It’s certainly understandable, as most times you might see people using a screen-reader they will have the speed turned up so high it is difficult to follow, and they might jump around the page seemingly randomly.
But for the most basic of testing - a simple top-to-bottom run through the page - you only need to know one or two key commands.
Before we get into starting up our screen‐reader, let’s think about how a screen‐reader will move you around the page.
The first thing you might want to know when you land on a page which you can’t see is “am I on the right page?”. To help with this, the first thing you will hear when a page loads is the contents of the page title element. This is why having the title contain the purpose of the page (ideally the same as the page's h1) as well as the site's name is a great idea.
Yes, screen‐readers make use of the keyboard to navigate, but put away thoughts that you will be moving about the page by hitting the tab key. Doing that will only let you hear the interactive content (links, form inputs, buttons etc), so you wouldn't really be able to listen to an article by tabbing around.
Instead you’ll be using the arrow keys to move up and down the page. As you move you will see an outline around content and this is a visual representation of what’s called the virtual cursor. This allows you to move by blocks of text, reading it aloud as you go, but will also read out extra information when you get to buttons, form inputs or other components.
As you move around the page you will see the browser focus (that focus state you get when you tab about with a keyboard) move with you when you reach an interactive element like a link, but then get left behind as you carry on using the virtual cursor.
As you move about with a screen‐reader you will notice that the reading order follows the code order. To a screen‐reader user the page is always one long tube of content - the visual layout doesn’t really matter - which is also why “directional content” (where you might refer to content “over to the right”) is discouraged (although “above” or “below” is generally ok).
You might hear some content read out differently to how you might expect. In general, don’t worry about this (or check with an experienced user), as screen-reader users can have different settings depending on their preferences. In particular, reference numbers are one area where we see developers trying to force the screen-reader to read them out in a particular way, not realising that all screen-readers allow users to move character-by-character if required, so they can easily check the exact details if they want.
If you find yourself thinking more content is needed to help screen-reader users understand an interface, the chances are they either don’t need it (again, check with an experienced user), or that content would help all users, or the interface may need to be reworked.
If you are a Mac user then you already have a screen‐reader installed called Voiceover.
Most screen‐readers work best with specific browsers and in Voiceover’s case that is Safari.
You use Command + F5 to start (and importantly to stop) Voiceover.
If you are on Windows, whilst there is a pre-installed screen‐reader called Narrator, it doesn’t have heavy usage so for testing I’d recommend using NVDA, which is a free download.
Most screen‐readers work best with specific browsers and in NVDA’s case that is Chrome or Firefox.
When you start NVDA it will appear as a system tray icon near the date and time; you may need to expand the tray to see it. You stop NVDA just like any other application.
Clicking on the icon will allow you to access the various settings.
Go to “Preferences”, then “Settings”, then “Speech”. Try out some of the synthesizers from the drop-down. Different voices using that synthesizer are available from the voices drop-down below it.
I find the Windows OneCore Hazel voice to be quite good, but try a few out and find one which you are happy listening to.
To stop NVDA automatically reading the whole page as soon as it loads, go to “Preferences”, then “Settings”, then “Browse Mode” and uncheck the “Automatic say all on page load” option.
By default NVDA will also read out whatever is under the mouse pointer. To turn this off go to “Preferences”, then “Settings”, then “Mouse” and uncheck “Enable mouse tracking”.
We’ll pick a simple (and well built) site to test our new screen‐reader skills. Open GOV.UK in the recommended browser and turn on your screen‐reader (you can use your mouse to place focus on the page, but once you have done that just use the keyboard).
The first thing you will notice is an outline around the element the screen‐reader is focussed on (remember this can be anything, including the whole page). This is the virtual cursor and will show you what the screen‐reader is talking about as you move about.
Before we start moving about, one useful key command to know is Ctrl - this stops the screen‐reader talking, until you navigate again. It can be very useful!
Let’s try just moving through the page.
If you are using Voiceover you will use Ctrl + Option + right arrow to move down the page (note, if the black outline is on the whole page, you might need to enter the page first by hitting Ctrl + Option + shift + down arrow - don’t worry, that’s the most complex combo you will need!). To move back up the page switch out the right arrow key for the left arrow key.
If you are using NVDA you just need to use the down arrow. To move back up simply use the up arrow key.
That’s it - that’s the basic navigation for using a screen‐reader!
This is all well and good, but it can take a long time to read through all of a page like this. So let’s add in one shortcut, navigating by headings.
For Voiceover you can use Ctrl + Alt + Cmd + h to jump from heading to heading, NVDA users have it a lot easier as they can just hit the h key to do the same. Add in the shift key to reverse the direction.
You will hear the heading level read out as you do this, which highlights the importance of getting your heading hierarchy right: navigating by headings is a good technique for building a mental layout of a page, its content, and how it all relates.
Now you have started with your screen‐reader of choice, carry on learning the basics - I wrote some screen‐reader beginner’s guides (for NVDA, Voiceover and JAWS) which take you through everything from how to understand what the screen‐readers are saying, to viewing lists of links and headings, how to navigate tables and more.
If you are a Mac user, tabbing to navigate links and form controls in a webpage is, oddly, turned off by default. This affects Firefox and Safari.
To fix Firefox you need to make a change in the Mac system settings.
Before macOS 13 it is in System Preferences > Keyboard > Shortcuts - check “Use keyboard navigation to move focus between controls”.
From macOS 13 onwards it is in System Settings > Keyboard - toggle on “Keyboard navigation”.
For Safari you need to make a change to Safari's own settings: in Safari > Settings > Advanced, check the option “Press Tab to highlight each item on a web page”.
Use the tab key to navigate around just the interactive elements on the page. You can use enter or space to activate buttons and enter to activate links. Some components might require arrow keys - radio button groups show focus on tab and then you can use space to select that radio or use arrow keys to move up and down the list.
Check if there is a skip link - an in-page link at the top of the page which takes you to the main content and means you don't have to tab your way through all the primary navigation on every page.
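A typical skip link (the class name here is my own) is the first focusable element on the page, visually hidden until it receives keyboard focus:

```html
<a class="skip-link" href="#main-content">Skip to main content</a>

<style>
  /* Off-screen until it receives keyboard focus */
  .skip-link {
    position: absolute;
    left: -999em;
  }
  .skip-link:focus {
    left: 0;
  }
</style>

<main id="main-content">
  …
</main>
```

Hiding it off-screen (rather than with display: none) keeps it in the tab order so a keyboard user will reach it first.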
Do all the things which allow interaction (link, buttons, form elements etc) get focus when you tab to them?
Can you reach everything you can interact with using a mouse with your keyboard instead?
Keyboard use relies heavily on being able to see where the focus currently is. If the focus indicator is removed by CSS, obscured, or just forgotten about, it can make it very difficult to work out where you are on the page. Making focus states really obvious (and, importantly, making them contrast well against non-focus states) helps, as focus can jump across large sections of the page.
Is there a good focus indicator for all the interactive content? Something which catches your eye easily so you can find it on a page.
If you can't see where the focus went, you need to check if this is just down to a poor focus state or because the focus has gone to content which has been hidden from view - something you will often find when content has been hidden off-screen until triggered (like a menu).
Make sure you check the page in both desktop and mobile/tablet view. External keyboards are commonly paired to smaller devices and desktops can display the mobile view when the page is zoomed. Keyboards are often ignored when designing for mobile unfortunately.
Does the order in which the links, buttons etc get focus make sense? It is all too easy to use CSS like grid to change the visual order of content, or use HTML like tabindex to force the focus order, without checking how this affects other users. A focus indicator jumping around a page haphazardly can be confusing or frustrating for a user.
Keyboard traps, where you find you cannot tab out of a component are not something you normally want. But occasionally they are desirable, like in the case of a modal dialog (often visually indicated by a dimming of the page behind it). With a dialog like this you want to keep the user within the dialog until they make some sort of decision and not to be able to break out and navigate the page behind.
You can test focus order visually using Firefox’s tab order option in developer tools. This will overlay numbers on the page showing where the focus will be placed.
Overlays like Firefox’s are good, but it can be tricky to track down a disappearing focus, especially in a crowded interface. You can be more targeted by using activeElement in the console. In Chrome’s console select Create live expression (the eye icon) and add document.activeElement. This will create a snippet which follows focus in the page, letting you see both where the focus is and which element actually has it.
By this we mean, when focus is moved (not by the user), how is it handled? For example, clicking on a close icon in a dialog means you will be removing that icon from the page (as it is part of the dialog), so where does the focus end up? Browsers do try to help with this when an element is removed, but as this doesn't account for screen‐readers, the focus needs to be managed to make a better user experience.
Screen-magnification users can also set their view to follow the focus, so managing focus well is very important for this group too.
In our dialog example we'd want focus to be returned to the thing which triggered the dialog, or if the dialog wasn't triggered by the user clicking something, back to whatever the user had focus on when it appeared. The trick is to try and be logical about it and keep it close to where the user's attention is.
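As a sketch of the dialog case, using the native <dialog> element (the element ids are my own). Note that showModal() keeps keyboard focus within the dialog while it is open; the explicit focus() call on close makes the return of focus to the trigger predictable across browsers and assistive technology:

```html
<button id="open">Delete account…</button>

<dialog id="confirm">
  <p>Are you sure?</p>
  <button id="close">Cancel</button>
</dialog>

<script>
  const opener = document.getElementById('open');
  const dialog = document.getElementById('confirm');

  opener.addEventListener('click', () => {
    dialog.showModal(); // modal: focus is trapped inside while open
  });

  document.getElementById('close').addEventListener('click', () => {
    dialog.close();
  });

  // However the dialog closes (button, Escape key), put focus
  // back on the control which opened it.
  dialog.addEventListener('close', () => opener.focus());
</script>
```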
The more dynamic the page the more focus management will be needed and the more important keyboard testing becomes.
As most developers build and test with a mouse it is all too easy to forget that not everyone can (or wants to) use one. So you can find buttons which only respond to a mouse click, effectively blocking a user from continuing.
So, can you complete the task with just a keyboard? At the most basic level this means can the user activate all the links, buttons and other interactive elements (including things like video players) on the page and complete any forms.
If you have any scrollable areas then make sure these can gain focus themselves to allow a keyboard user to be able to scroll the content, otherwise the content inside might not be reachable.
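A sketch of making such a region keyboard-operable (the class name is my own): give it tabindex="0" so it can take focus, plus a role and label so a screen-reader user knows what they have landed on.

```html
<div class="code-scroller" tabindex="0" role="region" aria-label="Code example">
  <pre>…long, horizontally scrolling content…</pre>
</div>
```

Once focused, the arrow keys scroll the region just as they would scroll the page.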
However, if not done well this can have a detrimental impact on other users, in particular speech-recognition users.
If you are reliant on speech-recognition software your primary interaction model is likely going to be saying "Tap (or click) [name of the thing on the screen]". So if a button has the text "Submit" on it, you might say "Tap submit" and the software will action the button.
Where this becomes more tricky is where the accessible name (the thing the speech-recognition software is waiting to hear) has been modified and no longer matches the visible name. This is exactly what we are doing when we add screen‐reader "only" contextual information to a link or other element.
General guidance is always to append the hidden copy to the visible copy to make it less problematic for speech-recognition users. But how exactly do other placements of this new copy affect the speech-recognition user's experience?
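For reference, the hidden copy in the examples below uses a visually-hidden class. A common implementation of this pattern (often named sr-only) looks something like this:

```css
/* Visually hidden, but still read by screen-readers */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```

Unlike display: none or visibility: hidden, this keeps the text in the accessibility tree, which is exactly why it also feeds into the accessible name the speech-recognition software listens for.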
What I tested with:
Dragon seems to find the best match by front-loading the wording vocalised by the user. If you say "click account" it will pick the 3 links on the page with "account" in the visible name (but won't also flag the ones with "account" only in the hidden copy alongside them). It can also search hidden copy - for example, try saying "click of account".
Previous testing also suggests Dragon will search attributes for a match, so it will search in this order, stopping once it finds hits:
In the examples below the phrasing "tap" is used for ease of writing, but this might be "click" instead if the speech-recognition software is using mouse-style input rather than touch-screen.
For control purposes: no hidden copy, so the visible text is the accessible name.
<a href="#">Change name</a>
Software | Result |
---|---|
Voice Control iOS | Actions link |
Voice Access Android | Actions link |
Dragon | Actions link |
<a href="#">
Edit applicant
<span class="sr-only"> of account</span>
</a>
Software | Result |
---|---|
Voice Control iOS | Actions link but only when the text is unique |
Voice Access Android | Actions link |
Dragon | Actions link |
<a href="#">
Edit name
<span class="sr-only"> of account</span>
</a>
<a href="#">
Edit name
<span class="sr-only"> of employee</span>
</a>
Software | Result |
---|---|
Voice Control iOS | Will not action any link, will not number links for selection |
Voice Access Android | Will assign a number to each link for selection |
Dragon | Will assign a number to each link for selection |
<a href="#">
Delete
<span class="sr-only">Lisa's </span>
account
</a>
Software | Result |
---|---|
Voice Control iOS | Will not action link |
Voice Access Android | Actions link |
Dragon | Actions link |
Software | Result |
---|---|
Voice Control iOS | Will not action link |
Voice Access Android | Will not action link |
Dragon | Actions link |
<a href="#">Remove
<span class="sr-only">Bob's </span>
permissions
</a>
<a href="#">Remove
<span class="sr-only">Bob's </span>
account
</a>
<a href="#">Remove
<span class="sr-only">Pete's </span>
account
</a>
Software | Result |
---|---|
Voice Control iOS | Will not action link |
Voice Access Android | Will assign a number to all links for selection |
Dragon | Will assign a number to all links for selection |
Software | Result |
---|---|
Voice Control iOS | Will not action link |
Voice Access Android | Will not action link |
Dragon | Will assign a number to bottom 2 links for selection |
However, Voiceover and Safari have now finally implemented some speak-as CSS support.
This property can alter how screen-readers process content which could potentially be really useful but also possibly dangerous and unwelcome for screen-reader users.
Whilst this is a step forward in support, Voiceover does some weird stuff when you use it on an inline element - for example making the parent element use the same setting, which can be especially problematic when using spell-out.
Unfortunately NVDA and JAWS are yet to do anything with it at all.
There are other properties in the spec, such as voice-volume: loud and rest-before, but the speak-as properties look to be the most useful in that balance of helping authors get meaning across without negative impact on screen-reader users.
Try out the examples below with your screen-reader to see how the different attributes work.
The following all work in Voiceover with Safari.
They do not currently (March 2020) work with JAWS or NVDA on any browser.
The following number will be read out as normal
1005600
<p>1005600</p>
The following number is using the digits value.
1005600
<p style="speak-as: digits">1005600</p>
The following number will be read out as normal
10DIG05600L
<p>10DIG05600L</p>
The following number is using the digits value.
10DIG05600L
<p style="speak-as: digits">10DIG05600L</p>
speak-as: digits on inline elements

Using speak-as: digits on an inline element currently makes its parent speak as digits also. In the following phrase only the second number should be spoken as digits.
This is a number 90909 and so is 1005600
<p>This is a number 90909 and so is <span style="speak-as: digits">1005600</span></p>
The following phrase will be read out as normal
Melbourne is a city in Australia
<p>Melbourne is a city in Australia</p>
The following phrase will be instructed to be spelled out
Melbourne is a city in Australia
<p style="speak-as: spell-out">Melbourne is a city in Australia</p>
The following reference will be instructed to be spelled out
10DIG05600L
<p style="speak-as: spell-out">10DIG05600L</p>
Note, using spell-out on an inline element currently makes its parent spell out also. In the following phrase only Australia should be spelled out.
Melbourne is a city in Australia
<p>Melbourne is a city in <span style="speak-as: spell-out">Australia</span></p>
The following phrase will be read out as normal
Melbourne is a city in Australia. It has a population of approx. 6 million people.
<p>Melbourne is a city in Australia. It has a population of approx. 6 million people.</p>
The following phrase will be instructed to be read out with punctuation
Melbourne is a city in Australia. It has a population of approx. 6 million people.
<p style="speak-as: literal-punctuation">Melbourne is a city in Australia. It has a population of approx. 6 million people.</p>
The following phrase will be instructed to be read out without punctuation
Melbourne is a city in Australia. It has a population of approx. 6 million people.
<p style="speak-as: no-punctuation">Melbourne is a city in Australia. It has a population of approx. 6 million people.</p>
Every person of fairly good education and of restless mind writes a book. As a rule, it is a superficial book, but it swells the bulk and it indicated the cerebral unrest that is trying to express itself. We have arrived at a condition in which more books are printed than the world can read. This is true not only of books that are not worth reading, but it is true of the books that are.
All this I take to be the result of an intellectual affranchisement that is new, and of a dissemination of knowledge instead of concentration of culture. Everybody wants to say something. But it is slowly growing upon the world that everybody has not got something to say. Therefore one may even at this moment detect the causes which will produce reaction. In 100 years there will not be so many books printed, but there will be more said. That seems to me to be inevitable.
If you replace book with blog, this chap has the current state of play down to a tee. Even down to the realisation that not everyone writing something is worth reading, but also that there are so many good writers that you just can't keep up.
This totally mirrors my online reading habits at the moment. I've gone from a huge selection of websites in my rss reader to a very select few. Additionally the frequency with which I check them has dropped from checking every day to maybe once a week or fortnight. Mainly I think I have Twitter to thank for this. By selecting the right people to follow I can quickly dip into any breaking industry news or new techniques as it's guaranteed at least one of those people will tweet it. The remainder of the posts in my rss feed can now be saved for later as I know they are unlikely to be time-sensitive. Ironically, as the time I have to catch up on these posts tends to be when I'm away from my computer, I publish them into slim books (via Lulu and only for my use), but at least I know these writings are worth reading.
Now, I’ve used ExpressionEngine before and this was my first thought when I sat down to put my MG blog together. However, it did seem a bit of overkill for what I wanted; I’d been there, done that, and wanted to try something a bit different.
As I’d been uploading all my MG photos to Flickr and knowing that their photo description field allows a limited but totally sufficient set of html tags coupled with a nice API (even though it’s not RESTful), I decided to try using Flickr as a blog engine.
Before I go on, take a quick look at the result: IncaYellow.com. [Editors note from 2023 - the site is no longer running on the Flickr API.]
The Flickr API is really pretty nice to work with. However, it can be a touch slow so I added in a bit of server-side caching to save visitors having to wait too long. For example, this is pulling in the individual photo information with a little bit of simple caching:
function getPhotoInfo($p) {
// build the API URL to call
$params = array(
'api_key' => 'YOUR_API_KEY',
'method' => 'flickr.photos.getInfo',
'photo_id' => $p,
);
$encoded_params = array();
foreach ($params as $k => $v){
$encoded_params[] = urlencode($k).'='.urlencode($v);
}
//call the API and decode the response
$flickrurl = "http://api.flickr.com/services/rest/?".implode('&', $encoded_params);
//set cache options
$cachefile = 'PATH_TO_CACHE_FOLDER'.$p.'photoData.xml';
$cachetimelimit = ((60 * 60) * 24); //day
//use the cache if newer than $cachetimelimit
if (file_exists($cachefile) && time() - $cachetimelimit < filemtime($cachefile)) {
$xml = file_get_contents($cachefile, true);
} else {
//get the data and save to the cache
$xml = file_get_contents($flickrurl);
// Cache the output to a file
$fp = fopen($cachefile, 'w');
fwrite($fp, $xml);
fclose($fp);
}
//parse the xml into an object (null if we got nothing back)
$theStuff = null;
if ( $xml ) {
$theStuff = simplexml_load_string($xml);
}
return $theStuff;
}
You can then parse the resulting xml file to pull out the required info. Using a couple of other Flickr API methods and the Delicious API meant I could pretty much reproduce a full-on blog, complete with archives and tag pages.
Now obviously this wouldn’t suit most blogs, but I knew that my posts would be pretty short and always be accompanied by a photo, so I was set.
There are a few enhancements I’ve got in mind for the next few weeks. The main one is being able to add more than one photo for a given post, especially for those more tricky mechanical jobs. This seems like a perfect opportunity to use machine tags to relate photos to each other and by adding something like:
incayellow:post:PHOTOID
to a photo, I could pull in all the other photos to do with that job. Comments are another nice-to-have, so I’ll be building that in too. I’ll also be tweaking the caching timing over time to find the optimal period to keep data for (it’d be nice to have the cache refresh for new posts). The other thing I just haven’t got around to yet is an RSS feed. I could just use the Flickr RSS feed, but that just seems lazy, so I’ll most likely roll my own. Also on the horizon is a 404 page, just because I think you should always have one.
Andy spent some time looking at CSS3 and where certain aspects of it can be used right now. One feature is media queries: a series of features such as width, height, aspect-ratio and resolution, with min- or max- prefixes, which enable serving CSS tailored to specific devices. Andy showed an example where a media query could be used to serve up a separate layout for mobile devices - specifically his iPhone.
Inspired by this demo I decided to reproduce that on this site. My aim was for anyone viewing on a supporting browser to be served up an additional set of styles if their browser-window’s size would normally have given them scrollbars. In practice this meant adding the following code to my main css file where I import my other files:
@import "/assets/reset.css";
@import "/assets/screen.css";
@media all and (max-width: 800px){
...additional styles…
}
This translates as “when the device supports media type all and is no wider than 800px, use these rules”. You can try this with Safari (I’ve tested on Safari 3) and Opera (tested on 9.24), though not Firefox, by resizing your browser to under 800px and hitting refresh. The refresh is required to activate the stylesheets, so it is not as robust a solution as one of the javascript options for this particular use, but it does provide an insight into how powerful CSS3 could be if we could get it out of the gate. The obvious applications are for mobile devices - especially those which don’t identify themselves as handheld and so by-pass the normal for-mobile styles.
Note that trying to add this rule in a <link.. /> tag doesn’t seem to work: Firefox applies the stylesheet even when the rules don’t match. It seems that in that state the browser ignores the bit of the rule it doesn’t understand and proceeds to implement the bits it does.
Not content with slapping together a basic site, they have gone the extra mile to ensure as many people as possible can get on board. As well as the standard Flash-powered content - note I say content, rather than site - they have a version running the video via Windows Media Player.
This alone gives them props as Flash is a more responsive video player, but they have also added a video transcript - the first time I’ve seen it done on an advertising site. I’m no expert but the transcript seems very well executed, so I assume they recruited professionals to do the job.
My only niggle would be that the link to the WMV player from the html version opens in a new window whilst the link from the Flash version remains in the parent, but this is small fish compared to the great steps they’ve taken for accessibility on the rest of the site. The interesting thing is that the transcript actually adds something to the experience for folks who can see the video, so everyone benefits.
The general thrust of the book is that the content should probably not be just your standard site (one of 4 possible options Cameron outlines), but rather a separate section or domain with content tailored to the user on the move (mobile referring to the user rather than the device). Contextualising what content people will want from your website via their phone is probably the most challenging part of mobile web design. It is quite possible that it is something which is not present on your primary domain, but this book gives you some good examples to help you get thinking in the right direction and using the phone's unique abilities to full effect.
This is not a technical reference, instead focusing on the greater picture, but Cameron gives plenty of links for additional reading, something which is important in what will possibly be many people’s first look at designing for mobile users. In all it was a great read, which I managed within a day, and anyone familiar with Cameron’s writing on his blog will feel right at home. Still not sure? The book site has a sample. Go download.
Whilst reading an article on the NY Times I came across their somewhat hidden dictionary feature. I’d double-clicked on a term I didn’t recognise, intending to do a right-click Google search, when a window popped up. I’d almost dismissed it before I realised it was a ‘feature’.
Apparently, this has caused some annoyance among those who idly click their mouse as they read, or highlight words as they go - even so far as the creation of counter-scripts and ad-blocking techniques. Plus I bet it confuses the heck out of new-to-the-web people for whom the NYTimes may be the first port of call. Whilst I do think it’s a neat feature, love the discoverability of it, and it may be great for assisting understanding, perhaps there should be a toggle somewhere in the member centre (I had a look and couldn’t see one)?
One of the big announcements to come out of WWDC was the release of Safari for Windows. It’s still in beta but seems perfectly workable, although it is a bit frustrating that to open a new tab you have to remember a key combo or right-click whilst on every other browser it’s a double-click on the tab bar. I think I’ll definitely use it for a while on Windows as it seems to have some neat features, but whether it’ll replace Firefox (with its super-useful extensions) as my default is another thing.
Overall, whilst very enjoyable, it came across as a bit confused as to its aim, probably not helped by my internal assumptions tying it to last year’s Carson Summit. Despite that, all the presentations were engaging and well delivered, and as always the organisation was flawless, so congrats to all involved. Even the sponsors' contributions were more than the normal out-of-sight variety - Microsoft’s crash-out zone with bean-bags, XBox360s and a live feed from the stage, and Adobe’s provision of the after-event drinks.
For me the highlights were:
Lots to think about. More photos on Flickr.
The spontaneous applause which this got from the floor indicates the understanding of how far behind schools, colleges and even some universities are when compared to best industry practice. I can’t help but agree, but I know it will never happen.
As long as there are people wanting to learn web design there will be educators trying to deliver it, especially with changes to funding models in the UK making every place on a course important. The problem I can see is one of hard-pressed teachers trying to fit a technical and pretty abstract subject into a small slot in the timetable. Of course the easiest thing is to go for the easy win and get the kids using a tool like Dreamweaver or FrontPage to knock something up quickly with no thought of the code behind the scenes. At least that way they have something to stick in a portfolio at the end of the day. And let’s not forget that most web design courses come under the auspices of the IT department, when to get the results we aspire to it should be a joint venture between IT, Art and even Humanities.
Teachers who get landed with delivering a web design course are typically ill-equipped to do so at a level which will give their students an insight into the real work going on in the industry. Maybe this is where the industry should step in to help. We could produce lesson plans with accompanying teacher notes suitable for different lengths of course, just as they are available for other subjects. These could be updated with new technologies and changes to standards, as unlike most subjects ours is a fast-paced industry which even full-time professionals sometimes have a hard time keeping up with, let alone hard-pressed teaching staff. We could produce some outstanding sample sites which could be broken down into their component parts, complete with notes, for use as teaching aids. We could even come up with a network of web design firms who would be open to taking student placements or coming into class for a guest speaker session. I’m sure that the firms which put a bit of effort in could end up with an endless supply of new talent from these sources.
Of course it can’t be all one-way. We need the teachers to let us in. Just as we recognise their abilities when it comes to communicating an idea to young minds, they need to see that accepting help from us is not a threat to their teaching skills. It needs to be a partnership, else we risk continuing on our current insane course where teachers are delivering outmoded classes to students who won’t know any different until they’re in the industry - if they ever make it that far.
Thursday (5th April) was the first (hopefully annual) Highland Fling. It is really good to see an event in the top half of the country - even if it did mean a 5am start to get up there in time! Fortunately I was on the train with fellow Highland Fling-ers Gareth and Mark who prevented me from falling asleep and missing my stop. A short walk to the Symposium Hall in the Royal College of Surgeons gave us a quick glimpse of Edinburgh before the conference started.
Alan had managed to gather an impressive group of speakers for the inaugural Fling, with Jeremy Keith kicking off events with an introduction to the concepts of progressive enhancement (via chair design and the ZX81). All the speakers managed to pitch their slots at the right level, with not too much in the way of coding tutorials but lots of useful debate and theoreticals. Andy Budd in particular gave a great presentation on CSS3 and why it is taking so long for the next recommendation (CSS2.1) to get out of the draft phase.
Mark Norman Francis covered how Yahoo uses graded browser support, despite suffering from a cold; Christian Heilmann and James Edwards covered Javascript and Ajax from 2 slightly opposing viewpoints - both compelling arguments; Drew McLellan ran through microformats and the potential use as an API; and Andy Clarke finished off with a talk about the future including a reveal of the new Stuff and Nonsense design.
Whilst lunch wasn’t provided, the location of the conference meant there were plenty of places to grab some good food - I can now recommend Black Medicine Coffee Co. which had great smoothies and free wifi. Coffee and biscuits were put on in both breaks and Alan even managed to provide some excellent swag! As usual all the speakers excelled and both they and Alan should be congratulated on a great day out which was fantastic value for money. More photos on Flickr.
Here’s to next year!
If you’ve ever struggled to explain how the web has changed and where it is going to someone, point them at this video. In under 5 minutes it gives you a fantastic impression of what Web 2.0 really means.
WebCards is the latest browser extension which takes advantage of microformats. Built by the chaps from whymicroformats, it exposes microformats embedded in pages - events, contact details, that sort of thing. It is still in Alpha/Beta at the moment, but you can sign up for notification when it launches (initially for Firefox) - though I wish they’d get rid of the annoying pop-up about a license requirement just for viewing an information page.
With Firefox looking to embed microformat support in version 3 and Microsoft also taking notice, it looks like this small but powerful tool is about to get a lot more attention.
If you ever have to create HTML emails for clients, the upcoming Outlook 2007 holds a nasty surprise. Microsoft have decided to remove the IE rendering engine and replace it with Word’s rather poorer one. In doing so they are pushing HTML emails back to the days of HTML 4.01 and CSS1 - specs which were last updated over 8 years ago. You’ve got to admit their timing sucks: just as IE finally gets a whole lot better, they decide to rip it out.
Take a look at the supported items and you’ll see there are no background images, CSS floats or positioning. How does this affect us? Well, it’s kinda like the browser position in the bad old days: if you are trying to push forward with CSS-based emails, you will be in for some headaches as they have made it pretty much impossible to do this. As Joe Hardy notes, even though align is supported, glitches make it useless - and the problems don’t stop there, check out his article for all the nasties. As Outlook is the primary company email client and comes with every Windows PC, it remains the most important email client out there for better or worse. The Campaign Monitor crew have a screenshot comparison of their CSS email newsletter in Outlook 2000 and 2007 to show the difference the change of rendering engine really means.
Yes we could argue about whether HTML belongs in email, but so many people - both clients and customers - expect it to be there that the view of developers is really not an issue. You cannot ignore that a graphical email has a lot more impact than a text-only one (assuming that the email client does not block the images by default, or the customer has the awareness to unblock them). Personally I think the decision to use HTML or text should be ruled by the content. HTML emails are particularly suited to e-commerce where the product is what the customer wants to see - Apple’s emails are a good example of these. Text-only emails are similarly effective where pure information is key - job board updates, blog newsletters and such. Like it or not, HTML emails will continue to be requested and Microsoft have now made it more difficult to do this in a way that moves us forward.
They had a great opportunity to use the IE7 engine in Outlook, but instead have forced us to roll our designs back to either fully table-based or text-based instead of the graceful degradation that CSS support could give us.
Despite Linksys already having brought out a product called iPhone, Apple today revealed their long-anticipated entry into the mobile/smart phone market bearing the same name. The iPhone was among a slew of products the various pundits hoped would be announced in Steve Jobs’ keynote address at the San Francisco MacWorld conference, although pretty much the whole of the keynote was given over to this and the previously announced iTV. This meant no immediately announced updates to hardware (except a new Airport), no iWork 07 and an iTV which had the most basic feature list (although still darn cool and sure to be built upon).
The iPhone looks like a great piece of kit, combining a full web browser (Safari), a mail client, wireless and bluetooth access, photo software (all running on OSX), and of course a video iPod. That last bit shouldn’t be a surprise as it is Apple’s killer app for this type of hardware and the thing they have over their competitors in the smartphone market. I think we’ll see all the major players move to the touch-screen interface in the coming months as Apple’s share of customers increases, though it is interesting they moved away from the clickwheel interaction for the iPod part of the phone.
We may also be seeing the first device which popularises the mobile web with the inclusion of a slightly tweaked Safari browser, giving people a consistent experience as they move from desktop to iPhone as it syncs your bookmarks between the two. This may be the end of WAP 2.0 and walled gardens.
Microsoft have recently updated their homepage (they’re moving the whole site to Sharepoint as part of the change). It is now a fully CSS-based page, albeit with an HTML 4.0 Transitional doctype.
Additionally some of the code is a little disappointing, for example instead of using heading elements they have employed classes:
<div class="heading">Popular Downloads</div>
Now, I’m guessing Sharepoint might have something to do with this, in which case it isn’t a great advert for it.
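For comparison, a real heading element would keep the same styling hook while restoring the semantics. A hypothetical fix (not Microsoft’s actual markup):

```html
<!-- Hypothetical alternative: the class stays for styling, the semantics return -->
<h2 class="heading">Popular Downloads</h2>
```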
On the plus side, I do like the design as it seems more task-oriented, such as the New PC section in the Home User area, though this is let down by the slow response of the navigation; it just doesn’t feel right.
Ryan Carson must be one of the busiest guys on the web. In addition to currently selling off DropSend, launching Hey Amigo and running Vitamin and Carson Workshops, he has taken the Future of Web Apps conference from London into the States. Now he’s expanding his events portfolio to include the recently announced Future of Web Design. How I wish he’d launched that before I’d spent my conference budget for the year!
From the looks of it, the new event will cater for the more UI-centred aspects of web development - typography, layouts and branding (I think I can guess a few of the speakers already) - as opposed to the more wide-ranging Apps conference. This is great as the design-type sessions at atmedia this year were heavily attended and showed that there is definitely a demand for this.
We’ve gone from having no large web conferences in the UK to having atmedia, dconstruct and the Future of ... in the space of a couple of years. It’s going to get to the point where I either can’t make them all or have to self-fund my way around. It’s a nice problem to have.
Anyway, if you think you can make it to London in April, go and sign up; registration for FoWD opens 18th January. If it is anything like last year’s FoWA or if the line-up is as strong as the upcoming one it should be a cracking day.
UPDATE: Unfortunately I’ll be missing this year’s FoWA conference as I’ll be out of the country, but I’ll definitely be looking forward to FoWD.
Paul Boag from Headscape produces a pretty regular podcast aimed at beginner developers and managers. Don’t let this put you off if you don’t count yourself as either of these, as Paul covers a wide range of topics and there will generally be something to pique your interest.
This is the only one I don’t listen to on the move, as it’s a video. Patrick Norton and Robert Heron cover all things gadgetry from HD TVs to Vista to digital French rabbits. I’m constantly amazed by the knowledge displayed by these two.
A great series of interviews from Brian Oberkirch with some of the web luminaries such as Ryan Carson, Jason Fried and Jeff Veen, to name a few recent ones.
Bryan Veloso of Avalonstar and Dan Rubin of Superfluous Banter team up to deliver a great podcast. This normally sounds a lot like a chat over a cup of coffee between two mates which makes it fantastic listening.
Don't let the title of this one put you off, it was named way before the whole O’Reilly thing. Pretty irregular but a good listen as they have a good interview each episode.
The lovely people over at Carson post their interviews as audio files as well as a full transcript. Again great interviews with people like Dan Cederholm of Simplebits and Kevin Rose of Digg.
From the same stable as dl.tv, this is a general round-up of all recent technology news.
What’s good about all these is that the personalities of the interviewers really come across in the audio format, meaning that even if all of them talked to the same person it would result in quite different discussions.
Ironically I now can’t find anywhere on their site to retrieve a saved quote.
Not only that, but their form validation only validates one field at a time, meaning that if you miss a couple of required fields you can end up having to resubmit the same form several times.
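Checking every required field in one pass and reporting all the failures together avoids that resubmission loop. A minimal sketch (the field names and messages are made up for illustration):

```javascript
// Validate all required fields at once, collecting every error rather
// than bailing out at the first one.
function validateForm(values, requiredFields) {
  var errors = [];
  requiredFields.forEach(function (name) {
    if (!values[name] || String(values[name]).trim() === '') {
      errors.push(name + ' is required');
    }
  });
  return errors; // an empty array means the form can be submitted
}
```

The caller can then display the whole list of errors next to the form, so the user fixes everything in a single round trip.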
Thank goodness this is a once-per-year event.
Andy Haigh, Insuresupermarket Product Manager, kindly responded to this post:
Adam, I agree with your concern over the password complexity that we use on our Motor Insurance site. In fact we have recently been making changes to improve this and this will be going live very soon now. However, let me explain why we have such tight constraints on our password and what it is used for:
We use the password that the user enters for 2 reasons:
- for the user to retrieve their saved quote from some of the insurers sites
- we also email you a link to our results quote page and the password is used by the user to access this
The issue that we face as an aggregator is that the insurers use a range of constraints on their password validation. This means that the password our user selects needs to meet the password constraints of ALL the insurers where we pass through the password from our site to the insurers’ sites. It would be much nicer if the insurers used a common set of validation rules.
The improvements to the password question went live yesterday (16 Nov). We now just use the password so that the user can retrieve their emailed results.
Joe Clark is one of the most enthusiastic people I’ve heard talk about accessibility. When WCAG 2 came out he was the first to point out issues with the documentation, and must still be among the few who have read all the WCAG 2 documents. His commitment is especially evident when it comes to multimedia accessibility (I still remember him gathering a posse to go and see captioned films after @media 2005).
Now Joe is starting a research project to establish a set of standards for captioning, audio description, subtitling, and dubbing. It will also develop training and certification and even create specialist fonts. All this costs money, and Joe is looking for funding to enable him to go out and get the $7 million Canadian the project will need over its lifetime. He’s after $7,777 to enable him to devote all his time to fundraising for 4 months - so this is subsistence living - and is asking the development community to help him achieve this target. When you consider the size of the community and the sum needed I’m sure he’ll have no problem getting there, and having seen Joe’s enthusiasm I’m sure he’ll manage to get the project funding too.
Head over to the Open & Closed Project to find out more and donate.
From the same chaps who’ve recently launched Disco, that smokin’ disc-burning app, comes Checkout. This is a nice looking and well-featured point-of-sale application, including stock and order management. Having used a few of these myself, this seems pretty much sorted from what I can derive from the screenshots/screencasts on the site. It’s really for the smaller businesses at the moment though as you can’t network copies together, so you’d be reliant on one till, however it is ideal for small online stores where all your transactions are postal and can be handled by a single computer.
A nice touch from TextMate - an auto-update gives the icon and project background a Halloween makeover.
From the energetic 'Chaiyya Chaiyya' opening music and Clive Owen's short monologue you know that Spike Lee has delivered with this film.
An outstanding cast is the first thing which strikes you when you first see this film on the shelf - Denzel Washington, Jodie Foster, Clive Owen, Christopher Plummer and Willem Dafoe. If this film hadn't delivered I'd have been mightily disappointed. Luckily I wasn't.
Washington plays Keith Frazier, a hostage negotiator brought in when a bank robbery turns into a siege. Clive Owen shines as the uncompromising gang leader and much of the film takes place within the confines of the bank as the two do their respective 'jobs'.
If I have one gripe it is that Foster and Dafoe don't have bigger parts, but it is a credit to Lee that he managed to get them for such relatively small, unexplored supporting roles.
I won't elaborate on the plot as that would spoil your enjoyment, suffice to say this is a definite recommendation.
My brother dropped by recently on a brief stay from Copenhagen, bringing with him some splendid specimens of Danish beer. Compared to our local ales, these are both in much larger bottles and much stronger - around 7% compared to our native 5%. Needless to say I’ll be saving these for a non-school night!
Fed up with the trainset? You can now extend it and add geek-points at the same time. Brio have brought out Brio Networkers, characters based on your computer’s innards. There’s an email hub, a search-bot, a CD-burner and a recycle bin; plus of course the enemy - viruses and pop-ups. Bonus is that it links up to existing Brio-style railways so you aren’t starting from scratch.
The website is great too and well worth a look. Who knows, this could give us the first generation wise to the dangers of viruses.
Blogs made more in-roads into the political dictionary this week with the (Beta, of course) launch of webcameron, David Cameron’s new site. It’s only been up a couple of days so posts are still a little thin on the ground and I’m sure he’s still getting used to the idea. Design firm Head London have made a pretty good job of it all, and it fairly sings web 2.0. Everything is there - count them - Tag Cloud, Flickr feed, YouTube-style videos, guest blogs, widgets, PodCasts. Unfortunately there are a couple of non-Web2.0 code issues - a double doctype, inline styles and javascript in the hrefs, but it is light years ahead of what we would normally expect from a politician’s site.
The design is nice and clean, keeping away from the oh-so-obvious party-blue, and giving us a site it is actually pleasurable to be on. There is even a video clip of a conversation between (I assume) a guy from Head London and Mr Cameron regarding the sections of the site which haven’t launched and how they were going to deal with the high volume of comments. Now I would have thought that they would have given this a bit of thought beforehand, making sure that the early adopters weren’t left hanging waiting for replies. I’m sure there are several people employed full-time just moderating the comments, which all have to be approved before publication.
I hope that this site continues to be updated and isn’t left to be another fashion-following fad. Whether this is a good movement for politics itself I’m not sure - is it heading too far towards personality-led politics - but it does make the party more human, something which many large companies have already discovered.
There’s a clever company out there which can create a piece of artwork for your lounge wall based on your DNA. Apparently they send a DNA sample kit out to you and you return your DNA, from which they make the art, and very nice it is too. But it does seem a lot of work and does cost quite a bit - labs generally don’t come cheap.
So why not get art from something you use every day - your favourite site, or your own site? Here’s where Thomas Baekdal comes in with his WEB2DNA. This parses your site, much as an RSS reader would, and allocates coloured lines depending on what markup it finds. Different types of site end up with their own unique artwork, with more semantic sites getting brighter piccies than ones which use lots of font tags. Oh, and as Thomas points out, due to the way in which this is all constructed it works best in IE rather than the usual Firefox recommendation.
For a company as large as Dell a blog is a golden opportunity to connect with their customers, find out what they want or what is causing them concern - much more so than any number of feedback questionnaires would. In a blog you can go back and ask for clarification and also be seen to be acting. Dell should be addressing customer support issues which seem to be the main complaint about them. Also they need to ensure their customers can find them - I could see no links to the blog from the main Dell site, even after using their search tool.
Don’t get me wrong, I think it is a huge thing that Dell have decided to start blogging, but I hope they manage to keep it away from the in-your-face marketing people. This could do more for their reputation than their marketing dept could imagine if done right.
There's been a surge in new web development books being announced in the past few weeks, so here's a quick run-down.
A welcome addition, especially in light of the upcoming WCAG 2.0 release. A replacement on the shelves for the good, but aging Accessible Web Sites from the now defunct Glasshaus, it will hopefully be covering the accessibility issues of some of the newer techniques such as image replacement and oh I dunno, ajax? Looking at the contributing author list this looks like it will be a good read.
This is one I'm waiting for, having made my merry way through Jeremy Keith's DOM Scripting (for good, not evil), I've been wanting more, but also wanted to avoid polluting my clean js knowledge with old-school techniques. Ppk says this is definitely a sequel kind of read so it should work out quite well.
I missed Andy's talk at this year's atmedia - damned 2 tracks! - but this may make up for it. It looks to be a 'think past the restrictions of CSS' vibe. Not much info apart from that, so we'll have to wait and see.
Cal Henderson did a great stint at the Carson Summit and I hope this book follows his presenting style rather than the more dry style O'Reilly is more known for.
From all the reviews I've seen this is the book to buy anyone you know who is just getting into web design. Something we've been missing and one I'm going to recommend.
Another Friends of Ed book, this time by Paul Haine and I'm guessing a non-official companion to Andy Budd's CSS Mastery.
A green cover for the new edition. Question is, will it still be referred to as 'the orange book'?
Informing customers is a great reason for a company publishing a blog, especially one as large and as public as the BBC. They've also ensured that they have gathered staff from the web, TV and radio, giving a wide base of topics, each with their own RSS feed.
This was a recommendation which got me past my usual reluctance to look at subtitled movies, and I'm glad it did.
The 36 of the title refers to the Parisian HQ of the Criminal Division of the DRPJ - the equivalent of the UK's Scotland Yard.
The film follows the rivalry of two captains - Vrinks of the BRI (the anti-gang squad) and Klein, the head of the BRB (anti-robbery) - as they bash heads over the hunting of a vicious armoured car gang. It's had its fair share of comparisons to Heat, but 36 doesn't characterise the criminals so much as it concentrates on the darker side of policing on the edge as the two captains race to stop the next hit. Lines get blurred between the cops and the robbers, Vrinks and Klein pushing the envelope (anyone who's a fan of The Shield will recognise this).
Gérard Depardieu and Daniel Auteuil shine as the two leads, presenting the gritty side of Parisian policing. This is bound to be remade into a Hollywood version, hopefully when they do they'll keep the darkness of it all.
I think Patrick has put himself in a bit of a tight spot. Looking at my bookshelf I can see at least 6 books authored by people who presented this year. Making next year's conference as good as this year's is going to be a real challenge. It was really great to see some familiar faces among the crowd and get to meet a bunch of new ones. The conference was split along two tracks which caused a good few people to wish they could split themselves in half and it meant some really tough decisions - how do you decide between Andy Clarke and Tantek Celik?
Anyway here are my best bits:
The WCAG panel encouraging people to actually read the 2.0 draft before making their mind up (and finding out about the quick references they've just put out); though most people agreed having Joe Clark there would have made it more of a debate.
Jeff Veen's 'Next Generation of Web Apps' presentation was the highlight of the whole conference and a great way to finish off day 1. I've always wanted to see Jeff speak as it was his Art and Science of Web Design book which gave me a huge leg-up in web design and he didn't disappoint, giving an inspiring look at the state of the industry, how far we've come and where we might be going next.
The social calendar sprang back into action with Patrick managing to convince the bar to open up for football and after a win and a few drinks it was down to the Texas Embassy for food with Gareth and Olly.
Dan Cederholm kicked off day 2 with a look into making sites bulletproof with examples from his Cork'd site and re-energised everyone after the excesses of the night before.
Cameron Moll told us how to look towards providing content to mobile devices and gave everyone some really useful tips to go away with. Yahoo!'s Nate Koechley's look at how they implemented standards into 3 of their upcoming products showed us how far the community has come in the past couple of years. Chatting to him afterwards about microformats and social tagging, he came across as a really nice guy.
Speaking of microformats, Tantek Celik enthused the whole lot of us with a run-down of what they are and the potential of them - along with examples of them out in the wild.
The final session was a great light-hearted 'hot topics' panel most ably hosted by Jeremy Keith - a great way to round off the day, followed by beer in the pub (CJ - are you going to start importing UK beer to Canada?).
What I noticed this year was a move towards more strategic thinking rather than nuts-and-bolts stuff. Also the presentations were without exception amazing - from the crafted slides to the practiced speakers - and coupled with a great venue this was a top class event. Congrats to Patrick and here's to @media 2007.
I’ve kept the whole thing pretty simple - the Microformats site has a bunch of examples of microformats in use already and hReview was the obvious choice.
The code runs like this. The hReview only requires the item being reviewed, but obviously for a review to be useful it needs a bit extra.
As the power of microformats comes from them being pulled out of context (eg Technorati’s new search tool), I wanted to attach my name and url to the review. This is done via the reviewer item, wrapped up in hCard.
<span class="reviewer vcard">
<a class="url fn" href="http://www.liptrot.org/about">Adam Liptrot</a>
</span>
The reviewer also has the date of the review assigned to it. Dates have to be machine-readable, so they need to be in a common format.
This format is contained within the title attribute of the abbr tag, as it is effectively abbreviating the more readable ‘June 19th 2006’ or even ‘yesterday’. So we add the following after the link within the reviewer span:
<abbr class="dtreviewed" title="20060619">June 19th 2006</abbr>
Then we have the mandatory item which I’ve wrapped in a link to the IMDB site reference (oh, and I’ve added a language tag to the link too, as it’s a French film).
<div class="item">
<a lang="fr" class="url fn" href="http://www.imdb.com/title/tt0390808/">36 Quai des Orfèvres</a>
</div>
Put it all together with a rating and the review itself and we get the full entry.
<div class="hreview">
<span class="reviewer vcard">
<a class="url fn" href="http://www.liptrot.org/about">Adam Liptrot</a>
<abbr class="dtreviewed" title="20060619">June 19th 2006</abbr>
</span>
<div class="item">
<a lang="fr" class="url fn" href="http://www.imdb.com/title/tt0390808/">36 Quai des Orfèvres</a>
</div>
<div>Rating:<span class="rating">4</span> out of 5</div>
<div class="description">
<p>Movie review goes here...</p>
</div>
</div>
At some point in the next few weeks I’ll update my previous movie reviews in the same way.
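To see why those class names and the machine-readable date matter, here is a rough sketch of how a consumer could pull the review data back out of the markup above. It is regex-based purely for illustration (a real tool like Technorati’s would use a proper HTML parser), and it assumes the exact structure of my example, where the reviewer’s fn appears before the item’s:

```javascript
// Illustrative only: extract hReview fields from the entry above.
function extractHReview(html) {
  // First element whose class attribute contains `cls`, returning its text.
  function getText(cls) {
    var m = html.match(new RegExp('class="[^"]*\\b' + cls + '\\b[^"]*"[^>]*>([^<]*)<'));
    return m ? m[1] : null;
  }
  var date = html.match(/class="dtreviewed" title="([^"]*)"/);
  return {
    reviewer: getText('fn'),     // first fn in this markup is the reviewer's name
    rating: getText('rating'),
    dtreviewed: date ? date[1] : null
  };
}
```

Run over the full entry, this should yield the reviewer, the rating and the review date without any knowledge of the surrounding page - which is exactly the point of microformats.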
The MacBook Pro laptops have a motion-sensor in them, designed to prevent damage to the hard-drive when dropped. This has already been put to admirable use for Mac lightsabre fights (I can imagine the calls that would be put into tech repair from people re-enacting the Darth/Obi Wan fights).
Here's a much more work-friendly use. This bit of scripting uses the motion-sensor to change virtual desktops when the monitor is tapped. The video is just great and it looks so much easier than a cluster of key combos.
Coincidentally, Web Standards Awards is closing at the same time that Stylegala is up for sale. Does this mean that the standards war is won? Well, no. WSA reached the 100-site milestone in January and Stylegala has had a distinct lack of new additions in the past few months. The WSA is much more along the lines of the Web Standards Project; the core of the site is the promotion of standards, with sites judged by the great and the good of web design. The announcement says it all:
It's no longer a myth that you can produce a stunning site with Web Standards
and the 100 site mark is a good place to bow out. Plus these guys have a lot of other stuff to do.
Job done. War won? Nope, there are still an awful lot of sites out there partying like it's 1999, but the tide has turned. The WSA stands as a showcase of standards-compliant design. Stylegala on the other hand is a gallery, a place for inspiration, although it has actually been more than just a gallery for some time. However the gallery does seem to have fallen by the wayside since the heady days of March 2005 when 20 sites were added. In the past few months only a couple of sites have made it in, despite, as the WSA declares:
beautiful sites with beautiful code are being produced by the hundreds; every month, every week, every day
Maybe the problem is that it is too difficult to separate out just 10 sites a month to feature - what about the ones that don't make it, are they not worthy? This is probably why many recent sites have been high-profile ones (like Verlee and UX Magazine) which everyone can put on a pedestal. Stylegala looks like it has just been overwhelmed.
Browsing around the Apple store website, I came across an interesting example of localization. Localization is the customisation of a product to a particular market, whether that be changing menus or content to a different language or modifying aspects of the product to take account of differences in culture.
On the Mac Mini page of the UK Apple store the product description goes “More expandability options and ports”, whilst on the US page it says “More expansion options and ports”. Not sure what cultural differences they are taking account of here, but for me the US description certainly sounds better.
This competition does raise some interesting points though.
The related blog states that:
It’s your homepage, (it’s your BBC) and I want to offer everyone the opportunity to feedback to us what they want it to look like.
OK, but I think it’s a safe bet that the submitted designs won’t have any meaningful user-testing behind them (though I’d like to be proved wrong) and so will reflect the opinions of the designer. Granted, the BBC might be able to derive some usability metrics from looking at common themes within the designs, such as highlighting a certain section of the site.
Also worth thinking about is that the BBC homepage is an international page. How will the submissions reflect the 2 versions of the page - the UK side and the world side? (Some of these questions might have been answered in the FAQs but the site has been largely unavailable all day.)
I’m looking forward to seeing the designs come in, and might even get around to doing one myself.
]]>David Weiss has a tour of the Mac lab at Microsoft on his blog. What hits you is the amount of testing they do, the huge amount of cabling - and yes the 150 Mac minis they use for automation. Lucky for Microsoft Apple don’t produce that many configurations.
]]>Dang it! As soon as I start getting my RSS reader under control, along come 2 sites which are sure to demand my attention.
From the talented hands of John Oxton comes Bite Size Standards, a multi-author site which aims to give good advice but in manageable “while the kettle’s boiling” chunks. I especially like the further reading references at the bottom of each post to let you delve a little deeper if you’re hankering after more depth.
Then there’s Vitamin, a web magazine from Carson Systems which visually reminds me of UX Magazine, but has a wider remit and appears more hands-on. Vitamin has a stack of great content to launch with, and no wonder, just take a look at that author list/advisory board. I’m off to cull some of my RSS feeds to make room in my day for all this stuff.
]]>I’m with ole Tim Berners Lee on this one. Good on Microsoft for not backing down to patent extortion, far too much of that malarkey goes on as it is, though whether it’ll end up doing them more damage with the community than good we’ll only see later down the road. Eolas, can’t you see you’re antagonizing just about everyone out there?
]]>The white text on black seems to be the trend for 2006 (Verlee, Dustin Diaz).
]]>From the video demo the web part looks a lot like most other visual development environments for the web, though it appears to have a rendering engine built from the ground up to be standards-compliant, and that isn’t IE7. Why not? Personally I’d still be testing everything in multiple browsers, starting with Firefox. Built-in rendering engines just make me suspicious.
]]>To know more about why styles are disabled on this website visit the Annual CSS Naked Day website for more information.
]]>CNN has made a slight shift in the layout of its US homepage. By moving the navigation to a horizontal bar, they have made more room for their video offerings.
Unfortunately they have run out of room and had to relegate some of the sections to a drop-down listing. I’m sure they could have put RSS, CNNtoGo et al to a ‘utility’ bar and allowed those 5 sidelined sections to have presence in the main bar.
Not to mention it doesn’t work if you are browsing with JavaScript turned off.
I think the resolution thing is less of an issue, strangely enough. 1024 is becoming the de facto standard, and as with other sites moving to this width, they have put the less important content on that side of the screen, so that those browsing with smaller viewports still get the page’s message.
However I do think that CNN should have maybe looked at a browser-width-dependent layout.
CNN’s web team probably have a terrible time juggling stakeholders, but if a site is unusable or difficult to navigate then everyone is going to lose out.
Why shouldn’t big sites have good IA and design? The BBC site always gets cited as a pillar of the web community by developers and the public. Sure it is difficult to do well, but the CNN nav bar now just feels thrown together, and I think is harder to read than their vertical version.
]]>Reading about his design process you can’t help but see definite parallels with web design. In particular is the concept of preliminary sketches done in the rough and in black and white - identical to the idea of wireframing in greyscale to prevent clients fixating on colours. One approach I found useful was that he charges for revisions to ensure revisions are meaningful and kept to a minimum. This is something I’ll be taking into my process - perhaps the first 2 revisions are free and subsequent ones are charged at a fixed rate.
]]>After following Ben Hammersley‘s posts about building this site I was interested to see how it panned out.
I do welcome the Guardian’s creation of an editorial site where we can read about opinions rather than just the dry facts. It is something which I think has been lacking from online news sites for a long time, and I’m sure Comment Is Free will find some big names to contribute over the next few weeks. The addition of open comments to those posts is something more of a risk.
I can imagine the bosses at the Guardian being a little concerned about opening up a blog which will include posts on political subjects to the general public. Politics is one of those subjects which you are always warned away from at dinner parties in case it turns into an impromptu rehash of Bugsy Malone.
George Galloway's post has resulted in something of a heated exchange but, just like the Newsvine entry on the same subject, has one or two well-thought-out comments which add to the information provided by the main post.
I’m still unsure as to how much goodness these systems will give. You still have to wade through some rubbish to find the good comments on both sites. I think the most interesting thing they will contribute is the potential for providing an international viewpoint on a news event. When the BBC opens up certain stories with a “Have your Say” section it more often than not adds value. This may be because they are able to moderate the posts more fully and I think this is where the Guardian and Newsvine will need to concentrate their efforts.
]]>The reorganisation seems to focus attention more on what the team does, in particular the Task Forces. A welcome addition is a list of who the WaSP members are and who is in which Task Force.
Also of note is that the site will be allowing comments on articles for the first time which along with the opening of their annual meeting at SxSW to the delegates points to a more receptive organisation. Now that can only be a good thing.
]]>I hope this is a non-issue. The guy says he won't enforce the patent himself but will sell it off to the highest bidder.
Am I the only one who thinks that it is ironic that someone who says he's been using this technology since the 1990s still has a full-Flash site with no skip link?
He sounds like the most annoying bloke ever. "My mom saw me struggling, and one day said, 'Why don't you figure out a way to bottle up that Balthaser magic.'" Urgh. Just the fact that he could even think of patenting something like this earns him the disrespect of all the web developers out there and proves that he just doesn't get it.
I can't see how this will make it past the first legal test. It certainly doesn't appear that he was influential in the development of any of these technologies.
I'm off to patent electricity. I've been using it for 30 years now and I'm sure no-one else has.
]]>This is something I’ve been trying to drill into people at work. However Podcast is a buzz term and whilst inaccurately used, does tend to fire people’s imagination more.
]]>I always have mixed feelings about movies made out of Philip K. Dick books as they are not always well-executed. But I always live in hope that someone out there will do his books justice with a new movie.
However this one may be different. A Scanner Darkly isn’t one of his stories I’ve read, so I have no preconceptions about how it should play out. Shot in a cross between comic book and live action, it is definitely eye-catching.
Come on, give us a release date!
]]>Of course the sign of a great product must be when it gets bought before it’s even launched.
MeasureMap is a web analytics service created by Jeff Veen and others at Adaptive Path. Jeff is moving to Google with the product.
I’ve been waiting for MeasureMap to launch ever since I saw hints of the UI a while back, and had been wondering why it was taking so long to hit the shelves. I’m guessing this had something to do with it.
The question everyone must be asking is: how is this going to align with the similarly shy Google Analytics?
]]>Molly Holzschlag and Andy Clarke gave a talk last night at the North East Usability and Accessibility Group at Northumbria University, entitled “Standards and the Design of Usable Sites”. Congrats to Tom and the University team for putting on a good event - with wine and nibbles too!
Molly opened the show with a run-down of the history of standards on the web. It’s amazing how far the web has come in such a short time and having hand-coded many a table-based page in the early days, I don’t think I could ever go back to coding that way - in fact I think it would take me quite a while to remember all the tricks we used to have to employ to get the pages to work in the browsers of the day. She did a great overview of modern web standards and the Layer Cake way of coding - structure, content, presentation, behaviour - though every time she used the term, the film kept jumping into my head.
An interesting thing to which Molly alluded was that even if the IE team got the browser up to spec with all standards/recommendations, they would still have to ensure it plays nicely with Windows Vista, and this requirement would overrule any standards in the case of conflict. Personally this is not something I’d even thought about having an impact on the development of IE. I could understand (to an extent) them not wanting to break all their big clients’ sites, but this adds another dimension. I can’t imagine all the pressure the IE team are getting from all sides in the run-up to the launch of IE7.
Molly also mentioned the use of microformats and their potential for the future of the web. After hearing them talked about at the Carson Summit they are something I’ll be taking more notice of.
Andy’s talk focussed on the extent of user experience on the web. He illustrated the old way of doing things when trying to provide accessible content to as many people as possible - Betsie on the BBC and Amazon Access. His point was that these methods may provide the main content of the full version but provide none of the user experience which is available on the ‘full’ version.
He then ran through the design of elements of Karova store and how they employed ‘Tesco testing’ to get some eye-opening usability data - in this case the difference in how men and women process on-screen layouts.
Molly and Andy are great public speakers and a big thanks to them for making the effort to come to Newcastle.
]]>Jeremy and the ClearLeft crew did a great job putting the event on, from greeting the attendees in the hotel lobby to providing an ad hoc wireless network.
The morning was primarily taken up with a review of Javascript syntax and an overview of DOM scripting, and the Ajax proper started after lunch. The morning’s tutoring was definitely worth it though, as Ajax relies on these foundations. Jeremy is a great speaker and managed to explain the ins and outs of Ajax in a way which caused a lightbulb to go on in my head at least a couple of times. As anyone who has seen him speak before or has read his book will know, he is a great advocate of unobtrusive scripting, using hooks in the code to tie in your javascript. He applies the same sort of graceful degradation to his Ajax coding.
His approach is to code the application so it works as a traditional page with the page refreshes that entails. Then add in the Ajax to ‘Hijax’ the page, diverting the call to the Ajax code if the client browser supports it. This relies on the server-side architecture being coded in a modular way. What this means is that you can target the server-side code for a particular module within a page. This may require some change of coding methodology, but it will mean you only have to code it once for both standard calls and Ajax calls.
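The Hijax approach Jeremy described might be sketched like this (a minimal illustration only - the `fragment` query parameter, the `hijax` class name and the `content` id are my own assumptions, not his actual code):

```javascript
// Build the URL the Ajax call requests, asking the modular server-side
// code for just the page fragment rather than the whole page.
// (The 'fragment=1' parameter is a hypothetical server convention.)
function fragmentUrl(href) {
  return href + (href.indexOf('?') === -1 ? '?' : '&') + 'fragment=1';
}

// Divert a link's click to Ajax when the browser supports it;
// otherwise the ordinary page refresh happens exactly as before.
function hijax(link, container) {
  link.onclick = function () {
    if (!window.XMLHttpRequest) return true; // no support: normal navigation
    var xhr = new XMLHttpRequest();
    xhr.open('GET', fragmentUrl(link.href), true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        container.innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);
    return false; // cancel the default navigation
  };
}

// Hook up every link flagged for hijaxing (hypothetical class name),
// guarded so the script is harmless outside a browser.
if (typeof document !== 'undefined' && document.getElementsByClassName) {
  var links = document.getElementsByClassName('hijax');
  for (var i = 0; i < links.length; i++) {
    hijax(links[i], document.getElementById('content'));
  }
}
```

The key point is that the markup works as plain links first; the script only layers the Ajax diversion on top.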
]]>This was one of those movies where you seem to recognise many of the actors, but can't generally place them. A bit of research on IMDB sorted me out.
Ephraim, the Mossad handler, was played by Geoffrey Rush, who for some reason kept reminding me of Sgt Bilko, and who played the ghostly Captain Barbossa in Pirates of the Caribbean. Then there was Eric Bana, who also played Hector in Troy and coincidentally starred in Black Hawk Down, which I watched on the way back from my London trip.
The movie was a pleasant surprise in that it balanced good dialogue with a fast-paced script that never lingered too long, coupled with lots of location changes to make a very visually appealing movie.
]]>The first talk of the day was a great one and set the tone for the rest of the day. Joshua gave us 45 minutes chock-full of useful information on how to get the most out of your servers (from using a proxy to throttle performance, to server caching); building an API (the main point being make it easy for people to get in and out of it by using a simple solution such as XML and not requiring an API key); and URL-rewriting.
URL beautification became a common theme throughout the day as several of the talks urged for simple URLs. Joshua's reasoning was that they are the main marketing push for your site as they are copied and pasted everywhere.
Other points
Tagging was one of the most useful things Joshua discussed as it is something I'm looking at for a couple of projects. Interestingly del.icio.us was built out of a txt file based system Joshua had for bookmarking websites, which eventually distilled his descriptions down into one-word meaningful 'tags'. He urged people not to be tempted to auto-add tags for users as this removes the attention required to give meaningful tags - the transaction cost of having to add tags each time makes the tag system stronger (although don't overdo this cost). I'm glad he spoke against a forced vocab list for tags. His reasoning was that a tag will mean different things to different people and there is no way you can add a 'description' to each tag.
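The distillation Joshua described - free-text descriptions boiling down to one-word tags - might look something like this sketch (the stopword list and the normalisation rules are my own illustrative assumptions, not his system):

```javascript
// Words too generic to be meaningful tags (illustrative list only).
var STOPWORDS = { 'a': 1, 'an': 1, 'the': 1, 'and': 1, 'of': 1,
                  'for': 1, 'to': 1, 'in': 1 };

// Distil a free-text description into lowercase, de-duplicated,
// one-word tags, in the spirit of the txt-file system del.icio.us
// grew out of.
function toTags(description) {
  var words = description.toLowerCase().split(/[^a-z0-9]+/);
  var tags = [];
  for (var i = 0; i < words.length; i++) {
    var w = words[i];
    if (w && !STOPWORDS[w] && tags.indexOf(w) === -1) {
      tags.push(w);
    }
  }
  return tags;
}
```

For example, `toTags('A guide to CSS layout and CSS floats')` yields `['guide', 'css', 'layout', 'floats']` - though, as Joshua argued, the value comes from the user choosing the words, not from a script choosing them.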
He said make sure you measure usage of features over a period of time - if people are still using a feature long after it launched then it is a success. When testing for usability, don't give people tasks or goals, as it makes them act totally differently from how they would normally - they start reading everything on the screen.
For marketing he said they didn't really do any - most of it was users evangelising - and he points out that del.icio.us isn't a community site; the community exists outside, on AIM and email and other communication methods. Del.icio.us doesn't own the community (along with all the pitfalls of hosting such a community).
]]>Cal looks at 10 things which he sees as being instrumental to Web 2.0 from the point of view of Flickr.
Allowing people to list friends on Flickr enables a social network to develop, making the site better for them and for you. In addition, people within the network can add metadata to each other's photos so giving added benefit to all.
Having multiple types of data for each item (date, tags, geolocation, 'interestingness') means you can display it in lots of different ways.
Start with a basic API (ie read-only) and work up to a full-featured one. The advantage of APIs is that they allow others to build features for you that you have neither the resources nor the inclination to build yourself. In addition they form a free marketing tool, as people who use them will talk about them.
A selfish reason for building an API is that it makes getting at your data easier, so reducing the number of people grabbing your data by more invasive methods such as page-scraping.
2 out of 2 for this one! Cal adds that once given a nice URL, it should never be changed, as it will have been bookmarked and linked to. Obvious but worth re-stating. He also mentions that as URLs become cleaner more people will start to manipulate them, so Flickr, for example, shows the hierarchy of the information within the URL. He goes on to say that it's worth making sure your URLs will scale with the site.
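A quick sketch of what hierarchy-in-the-URL means in practice (the route shapes below are my own illustration, not Flickr's actual scheme):

```javascript
// Build a clean, hierarchical URL: each path segment is a level of
// the information hierarchy, so chopping the last segment off should
// always land the user on a meaningful page.
function photoUrl(user, photoId) {
  return '/photos/' + encodeURIComponent(user) + '/' +
         (photoId ? photoId + '/' : '');
}

// Hand-editing a URL 'upwards': strip the final path segment.
function parentUrl(url) {
  return url.replace(/\/[^\/]+\/?$/, '/');
}
```

So `photoUrl('jbloggs', 123)` gives `/photos/jbloggs/123/`, and trimming a segment with `parentUrl` gives `/photos/jbloggs/` - the user's photo stream - which is exactly the kind of manipulation Cal says users will attempt once URLs are clean.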
Not the first time this would be looked at today: it enables load on the server to be reduced (something which was raised later was that you might experience increased server load as your users interact more with the site - but that's all good!).
It's all about internationalisation and localisation, baby! Cal points out the difference between the two (enabling storage vs output in different languages).
Bringing the website out of the browser means your site has more reach and makes it easier for users to interact with it (dragging & dropping files is one task which is far easier to accomplish on the desktop). Build widgets to allow uploading of files, plugging into the API. Allow email to be a delivery tool for publishing content.
WAP really hasn't changed our lives, but many of the newer phones have browsers based on Opera. Cal's main point here is that it is a different type of user who is going to be using a mobile to view the site and so you should re-purpose your content to suit; swapping out the styles may no longer be enough.
This is important stuff for service sites - allow your users to feel that they can leave with all their data at any time (and that includes any metadata which they may have added while using your service). All this can be accomplished via APIs, allowing 3rd-party services to add more value.
Again important stuff. The data belongs to the user along with all the rights, unless specified by the user via Creative Commons or suchlike. The site cannot use or modify the data without their permission.
]]>Every service can be built upon existing services, and this makes them all the more powerful. This all incorporates the use of APIs of course, which has the effect of embedding your service within the community. An interesting point was that the combination of GoogleMaps and GoogleAds could enable everyone's favourite search engine to target their advertising geographically. This ties into Tim O'Reilly's assertion that the race is on to 'own' certain types of data - eg location, identity, calendaring of public events. It is these services which will form the basis of the web as a platform.
He then went on to identify the 'Architectural Principles' of Web 2.0, citing Matt Biddulph's 'The Application of Weblike Design to Data'.
Having had a cursory look at Rails I can see where this talk is heading from the outset. It is centred around 'convention over configuration' and the basic premise seems to be that most of the work done by web programmers is mundane and is repeated again and again - 'You are not a snowflake'. Shouldn't you then concentrate on what makes you unique in coding terms and let the framework do all the heavy lifting?
To demonstrate this he shows us some code in Rails and proceeds to remove everything which confers some sort of pattern, from naming fields in tables to setting defaults. This leaves a much slimmed down code example. If we make the configuration the exception rather than the norm we can write slimmer, more beautiful code.
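Translated out of Rails into this document's own language, the idea might be sketched like this (a hypothetical helper of my own, purely to illustrate the principle):

```javascript
// Convention over configuration, in miniature: derive the table name
// from the model name by rule (lowercase, pluralise), and only store
// an explicit mapping for the models that break the pattern. The
// configuration becomes the exception rather than the norm.
function tableNameFor(modelName, exceptions) {
  exceptions = exceptions || {};
  if (exceptions[modelName]) {
    return exceptions[modelName]; // the exception: configured explicitly
  }
  return modelName.toLowerCase() + 's'; // the convention: no code needed
}
```

Under the convention, `tableNameFor('Post')` is simply `'posts'` with zero configuration; only an irregular model like `Person` needs the explicit `{ Person: 'people' }` entry. Scale that up across field names, defaults and routes and you get Rails' slimmed-down code.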
Flexibility is overrated - it leads to more complex systems which are harder to modify; constraints are liberating and lead to consistent systems and happier teams.
He then moves into Rails proper and demonstrates how it encourages good practice, for example by incorporating test pages into the cycle; "You do know testing is good, don't you?" Compare this to PHP (David developed in PHP for 5 years) where the devil on the other shoulder tends to get heard: "You can test it at the end", "No-one else needs to know how this algorithm works - it makes you a more valuable employee!".
]]>APIs obscure the storage format and the retrieval process.
This was a great talk and made my top spot on the feedback. No-one ever talks about this kind of detail in the frank way that Ryan did.
You don't have to be big anymore. Web applications are much more acceptable to people than they were a few years ago. Combine this with the plummeting cost of hardware and the availability of Open Source software and operating systems.
Ryan defines enterprise as mass market or 1,000+ users.
He says that the minimum cost for an enterprise web app is £30,000. You should make sure that the idea is financially viable, ie that it is worth paying for. Use your common sense - would you pay for it? Be cautious about your projections - take a pessimistic guess and then cut it by 45%. Are you still in business? Then go ahead - oh, and make sure you plan for profit from the start.
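Ryan's sanity check is simple arithmetic, sketched here with made-up figures of my own:

```javascript
// Cut a pessimistic revenue guess by a further 45%, per Ryan's rule
// of thumb, leaving 55% of the original figure.
function cautiousRevenue(pessimisticGuess) {
  return pessimisticGuess * (1 - 0.45);
}

// "Are you still in business?" - does the cautious figure still
// clear your cost base?
function stillInBusiness(pessimisticGuess, costs) {
  return cautiousRevenue(pessimisticGuess) >= costs;
}
```

So a pessimistic £60,000 of projected revenue becomes roughly £33,000 after the cut, which still clears a £30,000 cost base; a £50,000 guess (roughly £27,500 after the cut) does not, and the idea fails the test.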
DropSend is a hardware intensive application so some of the following figures would need adjusting for other apps.
Item | Cost |
---|---|
Branding and UI design | Ryan Shelton Mutado.com £5,000 |
Development | Plum Digital Media £8,500 + equity |
Desktop Apps | £2,750 |
XHTML/CSS | £1,600 |
Hardware | Old Linux box for dev testing £500 |
Hosting and maintenance | 5 Servers from BitPusher £800 pcm |
Legal | £2,630 |
Accounting | £500 |
Linux Specialist | £500 |
Misc | £1,950 |
Trademark | £250 |
Merchant Account | Halifax £200 |
Payment Processor | Secure Trading £500 |
Total | £25,680 |
And that only includes one month of hosting.
To help with raising this capital Ryan ran a side business - Carson Workshops - but it still took a year to raise the necessary funds.
Go for quiet talent rather than rock-stars. Big names cost and are generally busy busy.
Offer a percentage of product equity (2-5%) which becomes bankable if the product is acquired.
Ask for recommendations - getting the wrong person can be disastrous. Or you can always outsource - Ryan tried India but it didn't work out for him, largely due to the distances involved.
Buy just enough hardware to launch, but build your app so it easily scales - can you easily plug in disk space? Don't get tempted by lots of shiny new servers.
Plan for scalability but don't obsess about it.
Don't spend money unless you have to:
Before you spend anything more than £25, just check yourself and make sure you really need it.
Make deals. Build websites, give away equity, give advertising on your blog.
Use IM, no phone calls.
Do as much yourself as possible:
You will go 10% over budget and 3 months over schedule. Plan for it at the outset and put it in the cash flow. Are you still in business?
Make use of those free 1 hour consultations!
Company terms of service will cost £1,000; contracts for freelancers £800; privacy policy (from Clickdocs) £15.
DropSend was developed primarily with cheap/free software from start to finish.
Don't spend money! Use blogs and word of mouth. Look for viral delivery tools - make your app tell other people about your app (eg DropSend sends email notifications and includes info on itself). Write about your app for the trade magazine of your target audience - they will generally be happy to accept it.
You need a seriously good reason to give away some of your company to a VC.
]]>He cheers up a little towards the end, saying JavaScript has been compared to Lisp with C syntax. I get the distinct impression that he was speaking from a hardcore programmer's point of view, but unfortunately this made a lot of what he was saying difficult to understand for many of the mere web programmers out there.
I'd love to see that 1-to-1 virtual reality model of the entire world.
]]>