Archive for the ‘Internet’ Category

About the internet, and advice to survivalists

September 1st, 2010 4 comments

A couple random thoughts, on topics as varied as Net Neutrality and how to survive the apocalypse in style.

First, a tactic for the pro-Net Neutrality folks (though I’m not sure how to turn it into a slogan that would fit on a T-shirt): explain that without net neutrality, your internet experience will become what your cable TV experience is today – numerous confusing tiers of service, lots of pay-per-view, crappy mandated equipment, and added fees for everything.

The internet won’t matter once the apocalypse occurs. Assuming you want to survive in style (I myself will probably end up on a spit with an apple in my mouth!) you might decide to stockpile gold, guns, and so forth. I have a better idea – stockpile something that is portable, would soon be worth more than its weight in gold for trade, and can survive almost indefinitely if stored correctly. I’m talking about spices of course. The contents of your local Whole Foods spice aisle will enable you to live like a king, especially after a few years when everyone is getting sick of plain roast squirrel and the nearest cinnamon tree may as well be on the moon.

Categories: Food, Internet Tags:

Call 1-976-GOOGLE for a good time

August 31st, 2010 1 comment

There are several sites that many people really depend on, such as Facebook and the various Google services (Gmail, Docs, and so forth), and perhaps even Twitter. I’d include Yahoo in there as well, at least for some people. You could go further and add some of the big blogging services and media/photo sharing services to the list as well.

By “depend on,” I really mean depend on, both for business and personal use. You might be a heavy Facebook user who depends on the site for interacting with family and friends. You might have hundreds of hours invested in a photo-sharing site, or depend on Gmail to run your business. We can all think of very critical ways most of us depend on one or more of these type of cloud-based services.

While having a backup is better than nothing, it doesn’t make up for the serious interruption that could happen if you lose access to your account on one of these services, either through foul play or an unresolved technical glitch.

None of these services, including Google and Facebook, offers any way for you to contact technical support. There’s no one to help you. Your only choice is generally to browse forums or send email in the vain hope that someone might look at it. The response times are awful. I know someone who was locked out of their Facebook account for weeks before the issue was resolved. Stories of people being locked out of their Google accounts are sadly common as well.

I can understand why sites like Google and Facebook do not provide free phone support – the cost would be overwhelming. However, I have often wondered why none of these companies has offered paid, premium support. I imagine if Google or Facebook offered “Red Carpet” tech support at, say, $50 per incident, it would end up being quite profitable, because most of the things people call about can probably be resolved somewhat quickly (I’m thinking account lockouts or very specific technical glitches). We’re not talking “Geek Squad” PC support questions here.

From the customer perspective, it would be a winner too. $50 might seem like a lot, but if your business depends on Gmail and you need to get an account issue resolved in an hour as opposed to a week, $50 (or even $100) would be cheap. You might be pissed that you had to pay, but at the same time relieved that it will get taken care of and that you have a real human, who has the power to fix and escalate things, at hand (needless to say, I would make the premium support domestically based).

Setting the price high (or perhaps charging per minute) would discourage calls from “casual” users and would be self-limiting as to the types of people and problems that would be handled. It would prevent calls from people anxious because their Facebook account has glitched for a couple hours. Still, there are a few issues to consider. If the issue ends up being something that is truly the fault of the service provider, a (partial) refund might be appropriate. Authentication issues might also have to be handled properly, especially for account access queries.

Still, I think this is an idea whose time has come. What do you think?

Categories: Internet Tags:

Only you can prevent bandwidth theft

August 17th, 2010 Comments off

This blog (and the other blogs and domains on my master account) are not very popular (in spite of the general awesomeness which pervades every pixel). Our monthly bandwidth is a couple gigabytes at best, which is why I was very surprised yesterday morning when I got an automated letter from my hosting provider telling me I was on a path to blow through my monthly allotment of 150 gigs of bandwidth and be liable for a big overage charge!

The culprit was one of those slimy, scammy “you won’t believe what this video shows the babysitter did when the parents were away” sites. They were direct-linking to the original of a tiny 25 KB PNG image that Dave uses for his site (and he holds the copyright on the image, adding insult to injury!). Downloaded, oh, a few million times, that adds up.

There are a few ways to deal with this. One obvious and fun way would be to simply replace the original image with one that perhaps contained a double bird and insulted the thief’s mother, but, as satisfying as that would have been, it would still eat my bandwidth. Another option would be to simply rename the image, breaking their IMG SRC tag, but while this would stop this specific thievery, it wouldn’t stop them (or anyone else) from figuring out the new image name and using it instead.

I needed a way to stop all external referrer image linking to my account, but still allow images to be referred when the page was locally hosted (i.e. part of my blog).
In other words, this will not allow someone to use your image as part of their site, directly from your server (normal hyperlinks to your site work the same as always).

After a fast and intense Google-powered brain-bang, I had found the answer!

The way to do this is via an .htaccess file that utilizes a built-in feature of the Apache web server called mod_rewrite.

You create a file called “.htaccess” at the very top level of the web site you want to protect (or append to the existing one if it’s already there), and put the following text in it:

RewriteEngine on
RewriteCond %{HTTP_REFERER} .
RewriteCond %{HTTP_REFERER} !^http://([^.]+\.)*yourdomain\.com/ [NC]
RewriteCond %{HTTP_REFERER} !^http://yourblogspotblog\.blogspot\.com/ [NC]
RewriteRule \.(jpg|gif|png|bmp|mp4|avi|mp3)$ - [F]

Replace “yourdomain” with the actual name of your domain (and obviously replace “com” with “org” or whatever if it is a .org site). The ([^.]+\.)* part is a wildcard, covering prefixes like “www”, as well as the naked, prefix-free domain.

You can have as many of these lines as you have domains you wish to allow linking from. In other words, this is a whitelist of allowed domains – generally ones you own or post to. I’ve included a Blogspot blog here as an example, in case you have a Blogspot blog from which you link to images hosted on your main domain.

Make sure to keep the backslashes, carets and other goop intact – they are part of the regular expressions.

The last line lists the file extensions that you are not permitting to be externally linked. In my case, I want to prevent links to common graphic, music and movie formats.
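If you want to sanity-check the patterns before deploying, here’s a rough Python mirror of the same whitelist logic. This is just a sketch of the decision the rules make, not what Apache actually runs, and the domain names are the same placeholders as above:

```python
import re

# Placeholder whitelist - substitute your real domains.
ALLOWED_REFERERS = [
    r"^http://([^.]+\.)*yourdomain\.com/",
    r"^http://yourblogspotblog\.blogspot\.com/",
]
# File extensions to protect, matching the RewriteRule above.
PROTECTED = re.compile(r"\.(jpg|gif|png|bmp|mp4|avi|mp3)$", re.IGNORECASE)

def is_blocked(referer, path):
    """True if a request would get a 403 under the .htaccess rules above."""
    if not PROTECTED.search(path):
        return False  # not a protected file type
    if not referer:
        return False  # empty referer passes (direct visits, strict proxies)
    # Blocked unless the referer matches a whitelisted domain
    # ([NC] in the rules = case-insensitive, hence re.IGNORECASE here).
    return not any(re.match(p, referer, re.IGNORECASE) for p in ALLOWED_REFERERS)
```

For example, `is_blocked("http://evil.example.com/page.html", "/images/dave.png")` comes back True, while the same image referred from www.yourdomain.com does not.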

Save your .htaccess file and you should be good to go – it should take effect immediately.

Now, you will want to test your changes.

You will need access to a “non-allowed” domain. If you have a friend with a web site, ask to use it, or use any free web host. To test, just create some HTML code that directly links to a file on your protected site – a normal IMG SRC or whatnot.

Save it, and clear your local browser cache – this step is very important, because if the image is in the cache somewhere, it will still be displayed even if the .htaccess file is working great. Then load the test page. You should see broken image indicators for the images.
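Alternatively, you can test from the command line, which sidesteps the browser cache entirely. Here’s a small Python sketch that spoofs the Referer header – the referring domain in the default argument is a placeholder, and you’d point it at one of your own protected images:

```python
import urllib.request
import urllib.error

def hotlink_status(url, referer="http://not-whitelisted.example.com/"):
    """Fetch url with a spoofed Referer and return the HTTP status code.
    403 means the hotlink protection fired; 200 means the file was served."""
    req = urllib.request.Request(url, headers={"Referer": referer})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
```

Once the rules are in place, requesting one of your protected images this way should return 403, while the same request with a whitelisted referer should return 200.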

If not, make sure again to clear your browser cache (or try on another machine), and check the .htaccess file to make sure the code is correct and it has proper permissions (644 – world readable, but only writeable by the owner).

Lastly, don’t forget to verify that images do show up properly from within your own site. If you made a typo in your domain name when editing the .htaccess file, this would be the result, so double-check with all the “whitelisted” domains.

Categories: Internet, Software testing Tags:

Paper, please

July 20th, 2010 Comments off

Amazon just announced that it is now selling more books in Kindle format than the traditional hardcover.

Of course, the Kindle’s not the only game in town for eReaders. Borders has one, as does Barnes and Noble and of course, Apple.

Anyone who knows my relative addiction to technology (motto: if it is good, it can be made better by adding electricity. If it has electricity, it can be made better by turning it into a robot. If it is a robot, it can be made better by giving it sentience) might be surprised to hear that I use none of these devices. I prefer paper books. The ones that are made from dead trees, that take up massive amounts of space, weigh a lot, and give you paper cuts. They also have a disturbing tendency to burn when there is a fire (of course, a Kindle will melt like butter, but all your books are backed up as digital files you can load on a new device, right?)

But, before you hand me a Metamucil and a box of Depends, allow me to share with you why I (a pretty voracious reader) still prefer paper books.

These first four reasons are positive, related to what I feel are real advantages of traditional books, on a personal level:

I like the tactile feel
I like the physical weight of the book in my hand, and the physical representation of progress as I turn pages and see myself move through the text. Something about a real book makes it easier to get “lost” and really connect with the author.

I enjoy the collecting
I have way too many books, filling my shelves and overspilling everywhere. These books are a physical manifestation of my intellectual growth, my education, my knowledge. They are portals to amazing fictional worlds that I have visited, or keys to great minds or museums. I like seeing them on my shelves, occasionally paging back through them. Any serious reader with a large library knows the feeling of recalling a favored passage or story that can be triggered just by seeing the title of a book on a shelf.

Real books can be shared
If a friend wants to borrow one of my books, he or she can do so. Likewise, I can borrow from friends. You can’t do this with e-books, which are locked to your own device via software incompatibilities and digital rights management. Obviously, real books can be resold as well. You will never browse an electronic used bookstore, nor pass along a treasured e-volume to a spouse or your kids.

Real books will last forever
You can read books that are hundreds of years old. Acid-free paper, stored halfway decently, will last centuries. Do you think any of the e-books you buy today will still be viewable in even a decade, much less a quarter century or more?

The next five reasons are related to technical limitations of e-books. All of these limitations might be overcome, at least somewhat, as the technology continues to mature.

Print is still higher quality
The iPhone 4 has the best screen of any electronic device on the market, and it is basically approaching print in quality. An iPhone 4 display on an iPad-sized device would be amazing, and it is certainly coming, but it isn’t there yet. And even this amazing screen is hard on the eyes if you stare at it for an hour. Print, especially high-quality print, beats any e-reader on the market, at least for now.

E-books are too expensive
E-books used to be ten bucks; now they are fifteen. Add to that the myriad of weird pricing decisions, absurd “on-sale” dates, and geographic and time-based market restrictions on various books, and you start to get the idea that publishers are their own worst enemy in making sure the natural advantages of e-books (any book in print, cheap, now) are never realized. If a new hardcover comes out, I’d rather pay Amazon 18 bucks (which is discounted) for a physical edition than 15 for an electronic version. No contest. But if the price were 8 bucks, I would think. Hard. I like paper books, but I also like saving money.

There’s less choice of books
Between the new and used marketplace on Amazon and similar sites, I can buy pretty much any book which has been in print the past several decades, as well as books from countries around the world which may not be available in the US. Due to licensing restrictions, only a small fraction of books are available electronically. E-books are like Redbox movies – fine if all you want are the big hits, but no depth.

Incompatible formats, devices, and DRM
DRM means I can’t lend my books to friends or sell them when I am done. Incompatible devices mean that if I develop a collection on my Kindle, and later Amazon decides to stop supporting it (unlikely, but stranger things have happened) – or I decide I like iBooks on the iPad better, I can’t transfer my collection (although I could use the iPad Kindle reader). There’s a ton of incompatible formats in the marketplace, and even the “standards” are not fully supported on all the various platforms, especially when DRM is involved.

E-book readers suffer from the limitations of being electronics
They break, run out of juice, don’t like heat or moisture, suffer glitches and crashes, and can’t be tossed around. A book can really be abused and still be readable (not that I would intentionally abuse my own books, but it is common to buy used books that have been less-than-gently used), but I really don’t want to abuse my iPad.

So, do I ever see myself switching to e-books? They do have some amazing advantages, most obviously the immediacy (start reading right away!) as well as the near infinite storage (not to mention searchability). I’ve already read a few short stories electronically, and it is not a bad experience. I am likely to use iBooks on my iPhone or iPad when travelling, or at other times when the convenience of carrying a bunch of books in a small device I’m already carrying really becomes important. Otherwise, I’ll take paper, please.

PS: Publishers, do you want to win me over – or at least tempt me – into the ebook world? How’s this for an idea: similar to how most Blu-ray discs come with a digital copy included at no (obvious) extra cost, how about including an electronic copy of a book when I buy the hardcover?

Categories: Internet Tags:

I need backup!

May 12th, 2010 3 comments

I have invested a lot of time and energy into Twitter and Facebook. You probably have as well. Think about it. On Twitter, you have the (probably carefully managed) list of people you follow, as well as your favorite Tweets, followers and other leavings. On Facebook, you have your friends lists, your profile, and of course all of your notes and Wall postings…not to mention photos.
As a computer user, you always back up your important files (right?) With services like Twitter and Facebook, it’s not so easy, since all the data and settings live in “the cloud.” Imagine if a glitch on Twitter or Facebook destroyed your account, or even worse, if your account were compromised and actively vandalized. In a worst-case scenario, as with any data loss, you would have to manually reconstruct what was there before.

Luckily, there are some pretty easy ways you can back up your online presence. I decided to back up my three most important cloud services: Facebook, Twitter, and Gmail.

For Facebook, I tried two free services. Both of these services act as Facebook API applications, so you have to give them permission in your Facebook account, the same way you do when you use, say, Farmville. The first app is called Give Me My Data and it is barebones and geeky. It issues commands to the Facebook API that return various data objects such as your Wall history, profile, friend lists, groups and so on. The data gets returned in a variety of formats such as raw text, CSV, or XML (your choice), which you can then copy and paste into a local document.
Give Me My Data is pretty thorough, but the data’s raw format isn’t for everyone.

A slightly more user friendly but less thorough app is Disco Explorer which sucks down the entire history of your Wall — all the links and bon mots you’ve shared over the years, plus friends’ comments and so forth. Disco Explorer uses the local database functionality of modern browsers (like Safari) to save the entire Wall as a locally cached web page, which you can then return to later. It automatically updates with your latest wall posts too. You can save the page as a webarchive in Safari and back that up also.

Between Disco Explorer and Give Me My Data, I am pretty comfortable that most of the content of my Facebook account is backed up. However, neither of these apps backs up photos. I am embarrassed to say that I backed my photos up manually, which actually only took about 15 minutes of clicking and dragging. There is a commercial app that does this (see below) but I didn’t want to spend any money.

On to Twitter. Here the choice is easy. There are several free webapps that back up the exact same data, which is made available via the Twitter API. You basically can back up your timeline (all your Tweets), the list of your favorite Tweets, your friends, followers, and direct messages. The two services I used were TweetScan Twitter Backup, which will back up Twitter for you and send you a link to download an Excel spreadsheet with all the data, and Tweetake, which does the exact same thing but gives you the option of a CSV file. Both of these apps authenticate to Twitter using the normal Twitter OAuth API.
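If you’d rather roll your own, the save-to-CSV half of such a backup is only a few lines of Python. The field names here are just the basics (id, timestamp, text), not the exact schema those services export:

```python
import csv

def tweets_to_csv(tweets, path):
    """Save a list of tweet dicts to a CSV file.
    Expects each dict to carry at least 'id', 'created_at', and 'text'."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_at", "text"])  # header row
        for t in tweets:
            writer.writerow([t["id"], t["created_at"], t["text"]])
```

Feed it whatever the Twitter API (or an export from one of the services above) hands you, and you get a spreadsheet-friendly file.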

Finally, Gmail. You can always be lazy and download from Gmail using POP (assuming you set your account up to support it), but I went with Gmail Backup, which has a free command-line app for the Mac and a GUI for Windows, to back up all the Gmail messages, as well as the Gmail labels for each message and any attachments. It took about half an hour to suck down the couple gigs I have on Gmail.
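For the do-it-yourself crowd, Python’s standard library covers the parsing half of a Gmail backup. This sketch only handles the raw RFC 822 bytes that imaplib or poplib hand back – the fetch loop itself (credentials, server names) is deliberately omitted:

```python
import email
from email import policy

def summarize_message(raw_bytes):
    """Pull the fields worth indexing out of one raw RFC 822 message."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    return {
        "subject": str(msg["Subject"]),
        "from": str(msg["From"]),
        "date": str(msg["Date"]),
    }
```

You’d call this once per message as you pull them down, building an index alongside the raw files you save.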

I should also note two solutions I did not use but you might find useful. First, there’s the commercial service, BackUpMy.Net which offers one-stop backup solutions for Gmail, Twitter, online photos, and blogs (it doesn’t do Facebook). Filling that hole is SocialSafe which offers comprehensive Facebook backups, including photos. So if you want to spend a bit of money, you can combine those two services for a truly comprehensive backup. I didn’t go this route, preferring the slightly more complicated but free services.

By the way, you might wonder: how do I back up the cloud service that is this blog? I SSH into my hosting provider and back up the MySQL database behind the blog, then download the database dump to my Mac (where it gets integrated into my local backup). The static pages I download manually and back up the same way. Both of these could easily be automated.
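As a sketch of that automation (the host, user, and database names below are placeholders, not my real setup):

```python
import datetime

def backup_command(host, user, db):
    """Build the ssh + mysqldump command; run it with subprocess.run,
    redirecting stdout to the dump file."""
    return ["ssh", f"{user}@{host}", f"mysqldump --single-transaction {db}"]

def dump_filename(db):
    """Date-stamped filename for today's dump."""
    return f"{db}-{datetime.date.today():%Y-%m-%d}.sql"
```

Wire the two together with something like `subprocess.run(backup_command("myhost.example.com", "me", "blog"), stdout=open(dump_filename("blog"), "wb"), check=True)` and drop it in a cron job.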

However you do it, you should back up your online presence. I’ve never met anyone sorry they didn’t back up enough stuff, but I have heard the opposite many times.

Categories: Internet Tags:


May 10th, 2010 1 comment

I think Facebook is kind of like Microsoft Word. When it started, it was really cool, and really useful. Lots of people really liked it and became enthusiastic fans.

With each revision, the people in charge of Microsoft Word added new features. Some new features people liked, some they hated, but in general many of the new features were useful and made it a better program.

Eventually, diminishing returns were reached. At this point, Word pretty much did everything people wanted in a word processing application, but the pressure was there to keep making it “better” and adding more stuff.

Feature creep became a serious problem. New revisions started piling on useless “features” that people would never use, becoming more and more confusing. Even worse, old features people relied on were changed for the sake of change.

Eventually, Word became the bloated application it is today. Of course, most people still use Word, because it is the de facto standard; however, there aren’t too many people who find Word to be a pleasure to use, like in the old days.

I think Facebook is going down the same path. Each new “revision” makes Facebook less and less useful and fun to use. Furthermore, the continuous changes in how popular, well-liked features work make the entire site harder to use. Many of Facebook’s most recent changes seem to fall into the “change for the sake of change” category. Some of the changes actually take away existing functionality, and have semi-intentionally made the essential privacy settings a confusing mess.

I consider myself somewhat knowledgeable about “figuring stuff out” on computers and the internet, and I think I’ve managed to get my privacy settings at a level I am comfortable with. However, it took me several hours of testing to get to this point, and even now I still cannot figure out how to enable what should be simple things like getting Facebook to display to my friends the fact that I graduated from the University of Kansas.

I am not going to “quit” Facebook. Like Word, in spite of the cruft and crap, enough of the underlying functionality (casual communication with old and new friends) remains intact that I still find the site useful. But I remain disappointed that Facebook is worse now than it used to be, and likely will continue to get worse in the future.

Categories: Internet Tags: