Archives for posts with tag: software

If you want to make a TextView clickable, simply setting an onClick handler won’t do it. Make sure you also set the android:clickable attribute:
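The attribute in question looks something like this in the layout XML (the id and text here are placeholders of my own, not from any particular project):

```xml
<!-- Sketch of the fix; only android:clickable is the point. -->
<TextView
    android:id="@+id/tappable_text"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Tap me"
    android:clickable="true" />
```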


$ logout

I very recently had a conversation with a client about implementing email notifications for his online community, built with my software. He, of course, had the urge to notify users every time the equivalent of a thread was created or responded to. There was also the need for an email to go out as soon as a personal message was received.

For the latter, I was on board; but for the former, I had to make a point, however subjective. My point was that I will unsubscribe from any site’s mailing list the minute I get an email about things I don’t care about. Emails are on a level with text messages for me, arriving on my phone with an envelope notification, pining to pull me away from reality the minute they land. And it’s not that my time is money; it’s that I loathe spending time reading random minutiae, which is why I hate your everyday “engaging” emails (especially when they arrive every day).

Needless to say, I made the point that when designing how a product will send out notification emails, it’s important to remember:

  • You’re not the only site that any given user is a member of.
  • People have vastly varying degrees of tolerance for emails meant only for getting them to your site again.
  • Don’t automatically opt users in. Unless your software is going to take me out to dinner and learn my quirks first, don’t assume you know how I like my email notifications. Instead, give me the choice and demonstrate why your notifications will be useful to me.

On a related note, LunchTable just got email notifications today, which stay unsent until you subscribe.

$ logout

Recently LunchTable received an influx of spam registrations: marked by similar-looking nonsensical usernames, “real names” that didn’t match their gender, and email addresses on adult sites. I didn’t want to use CAPTCHA to mitigate this, because I personally die a little inside at each CAPTCHA encounter, so I took another route.

With a little help from StackOverflow (can’t find the link now, will update), I found a clever solution that banks on spam scripts not executing Javascript. In less than 12 hours of being live, it caught and blocked two spam registrations. I start by adding a hidden input field to my registration form and filling it with default text:

<input type="hidden" id="robot" name="beep-beep-boop" value="iamspam" />

Next, I add a little Javascript to change this value to a numeric value, and subsequently increment it every second (this will come in handy later):

var botZapper = function() {
    if (document.getElementById("robot")) {
        var a = document.getElementById("robot");
        if (isNaN(a.value)) {
            // First run: replace the default text with a numeric value.
            a.value = 0;
        } else {
            a.value = parseInt(a.value, 10) + 1;
        }
    }
    setTimeout(botZapper, 1000);
};
botZapper();

Finally, on the server side, I do a simple check (in PHP):

$botTest = $_POST['beep-beep-boop'];
if ($botTest == "iamspam" || !is_numeric($botTest) || $botTest < 10) {
    // This appears to be spam!
    header("Location: /");
    exit;
}
// ...database INSERT code untouched by bots...

This checks whether any of the following conditions is true, each indicating a bot:

  • First, if the value hasn’t changed, meaning the user didn’t have Javascript enabled.
  • Second, if the submitted value is some other text we didn’t expect.
  • Third, if JS was enabled, but the form was submitted within 10 seconds.

If any of these checks trips, I mercilessly redirect the bot to the home page with no “fail” message! My scorn is certainly felt.

Your time threshold will vary depending on the length of your form, and you will need to accommodate users without Javascript enabled at all. However, as of this writing, I’ve caught four attempted spam registrations from China, none of which updated the hidden input field (failing the first test).

$ logout

I read an article a few years ago about someone noticing a cyclical trend in computing: going from dumb-terminal-connected-to-mainframe to personal-computing and now back to cloud computing.

I’ve always loved the idea of creating a website, ever since I made my first one on an IBM ThinkPad running Windows 95 in 5th grade. It was both my foray into “programming” (just HTML, not much Javascript) and my first time sharing something I’d created with the world. I’ve held on to that feeling through all the websites I’ve made since then: the feeling of looking at my own creation, which I could change at any time to my liking. Like a painting that never dries, on constant display in space, where anyone in the world can see it. This is quite the image of grandeur, but one that attracts me to what I do, though I lack the hubris to match it.

So when new services come about—ones that allow you to create something, especially—I’m instantly hesitant to trust them completely with my data. Back in “the day,” any Myspace designs were best hosted on my own site, in case (or when) they changed the design. Important Twitter-like statuses are best stored on my local machine. I can only use Dropbox knowing those files are actually saved on my computers, and not purely on their servers.

It’s this visceral feeling that makes me uneasy and mostly apprehensive about the longevity of the activities I partake in on the internet. Some creations aren’t as important as others: I won’t miss a Facebook status disappearing into the great /dev/null in the sky, but some products of using these platforms (a Facebook note, for example) are worth keeping, and it’d be nice to know they’re safely on one of my own hard drives.

In my own case, there’s LunchTable: users post statuses (long and short) and interact with each other, adding to a group’s conversation what is necessary or interesting. They have the ability to create something they otherwise can’t in offline software— but why shouldn’t they reap the benefits of both worlds? With LunchBox, the accompanying open source project for using exported LunchTable data, I’m hoping to encourage users to both create something meaningful and feel like they’re not dependent on a service to forever access their own data. It’s a trend I would love to see more of.

$ logout

If you’re going the route of a vertical background gradient on your site (using a tool like the Ultimate CSS Gradient Generator; seriously, it’s great), you might notice that it does in fact fill your entire body, but not the entire window like you’re used to with solid colors. The fix is simple:

html {
    min-height: 100%;
}

Just like that: beautiful browser-based gradients.
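Put together with a generator-produced gradient, the stylesheet looks something like this (the color stops below are placeholders of mine, not any particular palette; paste your own generator output):

```css
/* Placeholder colors; substitute your generator's output. */
html {
    min-height: 100%; /* stretch the gradient to the full window height */
}
body {
    background: #1e5799; /* solid-color fallback for older browsers */
    background: linear-gradient(to bottom, #1e5799 0%, #7db9e8 100%);
}
```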

I’ve made a lot of from-scratch websites in the years Chrome has been around (and before that, Firefox). In the old days—around IE 7 or so—I remember hoping to support old versions of Internet Explorer. I would test thoroughly and spend hours on workarounds for the archaic browsers. But not anymore.

With Chrome and Firefox on quick update schedules, it’s obvious that they are doing web browsers right: quick iterations that keep up with the web they provide the lens to, not faux-modern browsers.

So I no longer address older browsers in the websites I create with anything more than a message telling users to upgrade. One site I’ve made, LunchTable, will simply let you know your browser is out of date before you can log in if you’re using IE (made easy through conditional comments), since IE 9 is the first version to support some of the CSS3 features I use, and IE 10 finally supports WebSockets for chat. Other browsers get a lazier approach that works well: feature detection. Before a user can chat, I check if WebSockets are supported; no need for me to research and test exhaustively, or to update the tests when new browser versions come out.
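The WebSockets check amounts to a one-line feature test. This is a sketch with names of my own choosing, not LunchTable’s actual code:

```javascript
// Feature detection sketch: only enable chat when the browser exposes WebSocket.
// The helper name and the usage below are illustrative.
function supportsWebSockets(win) {
    // Guard against missing/null globals, then test for the constructor.
    return typeof win !== "undefined" && win !== null && "WebSocket" in win;
}

// Usage (in a browser): gate the chat feature on the test result.
// if (supportsWebSockets(window)) { startChat(); } else { showUpgradeNotice(); }
```

The same pattern extends to any other API: test for the property before wiring up the feature, and tell the user what won’t work instead of failing silently.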

The company I work for has a web editor that utilizes a ton of browser APIs (like FileReader) to create a seamless “editor” that works entirely on the client-side. While Opera happens to break for mysterious reasons and IE apparently doesn’t know every CSS2 or 3 selector, every other browser can check for the needed functionality and alert the user about certain functions that just won’t work with their current browser. This allows, for example, Safari 5.1 users to still log in and use most features of the application, but when they can’t upload images, we expect they won’t be surprised.

This approach puts more of a burden on the end user, which my old CS professor used to argue vehemently against (he held that the developer should be trimming input data and formatting it correctly, for example). But it is only the burden of keeping your software up to date, a responsibility of any computer user. And in the case of IE, this approach encourages users to exercise their freedom of choice in the web browser market. Truly, a win for everyone.