This post is in Dutch as it concerns the upcoming Dutch parliamentary elections.
I have zero point zero confidence left in the established order, and the Piratenpartij's programme aligns reasonably well with my ideals. Besides, it can't hurt to finally get some people in Den Haag who actually understand what that strange Internet thing really is. You know, that phenomenon modern society runs on… Even if they only win a single seat, a bit of know-how in the Kamer really isn't a bad idea.
So, if you're going to vote and you don't rule out single-issue parties out of hand, at least have a look through the Piratenpartij's programme. They even have a summary in comic form, if reading isn't your thing.
There’s been some fuss in the tech community about Twitter’s latest announcement to developers. It seems that Twitter would prefer to see all third party clients just go away, leading some people to get a bit nervous. Others seem more confident in their business, but it just goes to show (again) what a bad idea it actually is to have a single company in control of what has become a rather important service on the Internet. Facebook is another example of the same bad idea, by the way, but I won’t even go there.
So a while ago, the folks at App.net had the idea to create an open Twitter-like API where people would pay for access to the service, instead of the service being profitable by selling its users to advertisers. While the initiative isn’t bad — in fact, I backed the project pretty early on (I’m @helvensteijn) — I’m still not sure it is the right way to go. How many people would really be willing to spend $50 a year for it? Besides, the service itself is still closed source, not to mention centralised.
Wouldn’t it be better to have a Twitter-like API that is both open source and decentralised? Kind of like how RSS works, but more real-time (and with JSON instead of XML). The way I see it, there could be a standard API (like an RFC detailing exactly what endpoints should exist and what they should do and how they should do it). Like App.net, this API would mimic Twitter’s basic functionality at the start, which could eventually be expanded with new features.
It would be like RSS in that everyone publishes their own stream of updates, on their own servers. Clients can then subscribe to these individual streams, et voilà, Twitter-like following is achieved. To post an update, a client simply pushes it to the proper endpoint on the user’s server. For people who don’t have a server, or just don’t want to set things up themselves, third parties could step in, sort of like what FeedBurner already does for RSS. The point is that everyone would (or at least could) be in control of their own data, and no one would be in control of the service as a whole.
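To make the idea concrete, here is a minimal in-process sketch of the scheme described above. The class names, endpoint shapes, and JSON field names are my own assumptions for illustration — no such spec exists; the point is only that each user's server owns its stream, and a client merges streams from many independent servers into one timeline.

```python
import json
from datetime import datetime, timezone


class StreamServer:
    """Simulates one user's own server holding their update stream.

    In a real deployment, post_update would sit behind something like
    POST /updates and get_updates behind GET /updates (names assumed).
    """

    def __init__(self, username):
        self.username = username
        self.updates = []

    def post_update(self, text):
        # The user's client pushes a new update to their own server.
        update = {
            "author": self.username,
            "text": text,
            "posted_at": datetime.now(timezone.utc).isoformat(),
        }
        self.updates.append(update)
        return update

    def get_updates(self, since=None):
        # Anyone may fetch the stream as JSON; `since` allows polling
        # for only the updates newer than a given timestamp.
        if since is None:
            return json.dumps(self.updates)
        return json.dumps(
            [u for u in self.updates if u["posted_at"] > since]
        )


class Client:
    """Follows several independent servers and merges their streams."""

    def __init__(self):
        self.subscriptions = []

    def follow(self, server):
        self.subscriptions.append(server)

    def timeline(self):
        merged = []
        for server in self.subscriptions:
            merged.extend(json.loads(server.get_updates()))
        # Newest first, like a familiar Twitter-style timeline.
        return sorted(merged, key=lambda u: u["posted_at"], reverse=True)
```

Following someone is then just `client.follow(their_server)` — no central party ever sees the whole graph, because "the service" is nothing more than the union of everyone's individual streams.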
I’m not saying that I would be the one to build or even design such a thing, because there are a lot of people out there who’d be far better qualified. I mean, I struggle to even program something as mundane as a blogging engine. I’m just throwing up a ball here, for anyone to catch.
Update (August 23, 2012)
Obviously, someone was thinking along the same lines as I was, because a day after I posted this, tent.io was introduced. It does pretty much exactly what I proposed, and a whole lot more. At least, it will soon. Definitely worth checking out.
A ridiculous new cookie law went into effect in the Netherlands last June that requires web sites to ask for a user’s permission to store any “non-essential” cookies. “Essential” cookies would be cookies keeping you logged in on a web site, or cookies storing the items in your virtual cart while shopping online. “Non-essential” cookies, on the other hand, are tracking cookies, basically.
Now, whether or not tracking cookies are essential depends on the person you ask. I’d bet advertisers think so. But that’s not really the point. This law is stupid because it doesn’t specify what exactly constitutes a user’s permission. Can it be implied or does it have to be explicit? Nobody seems to know.
One funny aspect of it all is that the web sites of most of the political parties that pushed the law didn’t actually comply with it until the Dutch telecommunications watchdog OPTA recently announced it would start enforcing it. Now they do, all of a sudden.
But it now boils down to this: since web site owners don’t know what exactly constitutes permission and OPTA has announced it would be enforcing the law, many Dutch sites have taken the safest possible action, and are now intrusively asking for users’ permission to store their tracking cookies. Some even use pop-ups for this. So now the Dutch Internet user is faced with banner after banner, pop-up after pop-up, asking if it’s okay to store cookies on their machine.
And once the user has decided whether or not they want to allow cookies? That choice is stored in… lo and behold, a cookie. I presume that cookie is considered to be “essential.”
Well, I’m not going to ask for users’ permission to store cookies. I’ve taken a different approach. As of now, this web site doesn’t store any cookies anymore, at all.* I’ve simply removed everything (essentially my latest tweet under the menu and Google Analytics) that stores third party cookies, and this site (at least since its last overhaul) has never stored any cookies of its own.
I don’t see this as a permanent solution, but chances are this new cookie law isn’t so permanent either. Our government has already been reprimanded by Brussels because the law goes much further than the EU directive that it is supposed to implement. We’ll see how this ridiculousness plays out.
* Unless you try to log in, but that would be “essential” cookies and you have no reason to do so in the first place.
I was vehemently opposed to SOPA, PIPA, ACTA, DMCA and all those nasty four letter acronyms, but something made me realize that it’s all pointless. Let them implement all the rotten legislation they want. The Internet was designed to be immune to nuclear warfare, it sure as hell will survive some politically misguided attempts to curtail it.
The more governments try to regulate the Internet in their own (or rather, their lobbyists’) views, the more inventive the Internet community will get to circumvent said regulation. It has happened before, and it will happen again. If the people thinking this shit up have learned nothing from Napster and Kazaa and BitTorrent and Usenet, I’m not worried. In the long run, they’re just shooting themselves in their feet.
A few days ago, the White House issued a statement in which they oppose SOPA and PIPA as they stand now, but also call upon the Internet community to “bring enthusiasm and know-how to [the] important challenge” of combatting online piracy. They just don’t get it.
Online piracy isn’t the problem, it’s just a symptom of an industry that outright refuses to acknowledge the fact that a large part of its business is no longer relevant. You see, the Internet has reduced the cost of distribution to the cost of bandwidth. There just isn’t as much dough in that as, say, treating your customers like shit and charging them big money for the privilege. Maybe they ought to start innovating, after all. To quote Nat Torkington:
We gave you the Web. We gave you MP3 and MP4. We gave you e-commerce, micropayments, PayPal, Netflix, iTunes, Amazon, the iPad, the iPhone, the laptop, 3G, wifi–hell, you can even get online while you’re on an AIRPLANE. What the hell more do you want from us?
The ball has been in the entertainment industry’s court for years now, and still they refuse to pick it up. It is 2012 for goodness’ sake. If you don’t let people pay for what they want, they’ll get it for free. And yes, you can compete with free, it’s called service. There’s massive value in that, and people are willing to pay. How many more examples is it going to take?
But hey, if these dinosaurs want to have it their way, why not let them? In the end, we’ll just decentralize and encrypt everything. Eventually, they’ll come begging for our money, and they’ll do anything to make us give it to them. We’ll see who can hold out the longest…
A long, long time ago, probably somewhere in East Africa, something interesting happened. Something that had never happened before, and has never happened again. It first occurred to early humans that it would be advantageous for them to actually know things. Not just remembering previous experiences, but posing a question, finding the answer and remembering that. It is this concept of knowledge, and the ability to expand and share it by asking and answering questions, that I believe sets humans apart from all other animals, even those considered to be highly intelligent.
Numerous different animals have been trained to answer our questions in equally numerous ways. With moderate success, I might add. Some of these animals have learned to answer fairly complex questions. But as far as I know, none of them have ever taken the next logical step and started asking us — or themselves, for that matter — questions. And why would they? Without a concept of knowledge, what benefit would it serve?
Domesticated animals definitely know how to ask for things, or at least make their wishes apparent to us. For instance, we have two dogs who have no problem communicating when they need to go, when they’re hungry, or just want to play. Especially in the last case, one of our dogs really appears to be saying “please come and play with me”, begging for attention while bringing some toy and being very excited and all. But nothing that would even hint at the possibility that they want to know something.
All this is just my brain’s tendency to go all philosophical when I’m feeling sick and am otherwise incapable of being productive. It simply hit me that, while largely taken for granted, our understanding of knowledge and our ability to expand on it and share it, is actually quite profound. Just a sentiment I thought was worth sharing.
First of all, I want to make it clear that, while I have done a bit of research, I am by no means an expert on this matter; I just find it very interesting. This article is purely about my personal experience with 3D displays, and what I think is wrong with them, based on that experience.
Having said that, let’s clarify what I mean by “3D displays”. I’m referring to the stereoscopic kind, those flat displays that project a different image to each eye. For this article, it’s not relevant how they accomplish that. What’s relevant is that they rely on stereopsis, meaning the disparity between what each eye sees, which gives us a cue of relative depth.
So, what exactly do I think is wrong with them? It boils down to two things: convergence and motion parallax. Both are fixed, which your brain doesn’t expect. There are other depth cues we see in everyday life that are still missing from 3D displays, but I think these two are most important.
Let’s start with convergence. This is the fact that when both eyes focus on the same object, they turn towards each other. The amount of convergence depends on the distance to the object. The closer the object, the more the eyes will turn inward. With a stereoscopic display, there really is only one distance, which is the distance to the display itself. To see things sharply, both eyes must focus, and thus converge, on the display. This fixed convergence conflicts with the stereoscopic depth cue that the display provides.
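The size of that conflict is easy to put a number on. When both eyes fixate a point at distance d, the angle between their lines of sight is 2·arctan(IPD / 2d), where IPD is the distance between the pupils (about 6.3 cm for a typical adult — an assumed average, not a measurement for any particular viewer). A quick calculation shows the mismatch between what the disparity suggests and what the screen forces:

```python
import math


def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two lines of sight when both eyes fixate a
    point at the given distance. 6.3 m... sorry, 6.3 cm (0.063 m) is
    a commonly quoted average interpupillary distance."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))


# The eyes must converge for the physical screen...
screen_angle = vergence_angle_deg(2.0)     # TV two metres away: ~1.8°
# ...while the disparity suggests an object much closer.
simulated_angle = vergence_angle_deg(0.5)  # "half a metre away": ~7.2°
```

So the disparity cue asks for roughly four times more convergence than the screen allows, and the brain has to reconcile the two. (This is the same mismatch usually discussed as the vergence–accommodation conflict.)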
Then, there’s motion parallax. When you look at an object and shift your head, you should see the object from a different perspective. In stereoscopic displays, depending on the technology used, when you shift your head either nothing happens — again conflicting with the depth illusion the display is trying to evoke — or the 3D effect falls apart completely, which is even worse.
The parallax problem has, in fact, already been solved, at least with generated imagery (i.e. video games, computer animations). I’ve seen demonstrations of 2D displays creating an illusion of depth, by changing the perspective using head tracking. Such a system could relatively easily be adapted for 3D displays, at least those using special glasses. Auto-stereoscopic displays (those without special glasses) generally have a parallax barrier or lenticular lenses fixed to the display, so dynamically shifting perspective seems somewhat more complicated there, unless the parallax barrier or lenticular lenses themselves could be shifted.
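The geometry behind those head-tracking demos is simple: for a point to appear fixed in space, its rendered position must follow the intersection of the eye-to-point line with the screen plane as the head moves. A minimal sketch (my own formulation of that projection, not any particular demo's code):

```python
def on_screen_x(point_x, point_z, head_x, screen_z):
    """Where a fixed point in space must be drawn on the screen so it
    appears stationary as the viewer's head moves laterally.

    Depths are measured from the viewer along the viewing axis:
    point_z is the point's depth, screen_z the physical display's.
    """
    # Intersect the line from the eye at (head_x, 0) through the point
    # at (point_x, point_z) with the screen plane at depth screen_z.
    t = screen_z / point_z
    return head_x + (point_x - head_x) * t
```

A lateral head shift h therefore moves the rendered point by h·(1 − screen_z/point_z): points on the screen plane stay put, nearby points shift against the head, and distant points follow it — exactly the motion parallax the brain expects. A renderer would evaluate this per eye, every frame, from the tracked head position.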
In 3D films, motion parallax would probably require more than two cameras, or maybe two widely spaced cameras with a computer interpolating the in-between perspectives. In either case (3D films or generated imagery), two fixed perspectives (one for each eye) is one thing; providing a whole range of perspectives might not even be feasible for most applications.
As for convergence, I guess it would be a lot more difficult, if not impossible, to solve. The way I see it, as long as the display itself is flat, convergence will always remain a problem.
Then, there’s a third problem which I’ve observed on my Nintendo 3DS. No matter if an object is in the foreground or in the background or in between, everything is sharp. It’s like it has an infinite depth of field. While the stereoscopic effect in and of itself is quite convincing, it makes it feel somewhat unnatural to me. 3D movies are better in that respect. The foreground is in focus with the background blurry, or vice versa. Perhaps there are games that are more like 3D movies in that regard, but both the OS and the games I’ve played so far exhibit this “infinite depth of field” effect.
In any case, while I personally don’t have many problems looking at 3D displays, I can definitely see how they can cause headaches for some. They give conflicting depth cues, and as with every other conflicting situation, some people cope better than others.
I’ve been using OS X Lion for a few days now, and apart from the obvious new features that were advertised by Apple or that are mentioned in every review, there’s a plethora of little things that are new or have changed. I want to highlight a few of those here, in no particular order.
The grey Apple logo now remains on screen during the entire boot sequence and also got a subtle emboss to it. Also, no more blue while booting.
It usually takes my iMac a few seconds after I hit Cmd+Opt+Eject to fall asleep. That hasn’t changed, but what has changed is that it now immediately turns off the display. The extra visual feedback it provides is nice.
The grey top area of windows now has a very subtle grainy texture to it.
The bottom corners of windows are now rounded.
Quick Look now also works on stacks in the Dock. Simply hover over any item and hit space.
Speaking of Quick Look, it now uses a light grey window instead of the HUD display. Also, click and hold or right-click the Open with [Application] button to open the file with a different application.
Right-click a file → Open with → App Store... to search the App Store for compatible applications.
The huge 512×512-pixel icons introduced in OS X Leopard are apparently still not huge enough. Some apps, including Launchpad and App Store, now have icons measuring 1024×1024 pixels.
Almost everything is now 64-bit, including the kernel and even iTunes.
The Special Characters applet got a face lift.
The dictionary popup also received a face lift.
DigitalColor Meter no longer displays hexadecimal values, which kinda sucks.
No more Front Row.
Safari’s content area can be resized horizontally even in full screen mode.
Safari now has WebGL support and a “Do Not Track” feature, but they’re apparently still in beta. Both options are located in the Develop menu and off by default.
There’s probably much, much more I haven’t discovered yet. Apple seems to have touched everything in this release.