A blog about platform wars : theory and practice.
So what does this mean for search engine competition and Google? Well, I think increasing a search engine's relevance to become competitive with Google's is a good goal, but it is a route that seems guaranteed to make you the Pepsi to their Coke or the Burger King to their McDonald's. What you really need is to change the rules of the game, the way the Apple iPod did.
Hmm. I wonder if another way of thinking of this is a “world of ends” type argument with the data-format playing the role of the network-protocol and programs playing the role of the “ends”.
The SemWeb argues for smart protocols (i.e. semantics in the file-format) whereas the SynWeb argues that semantics belongs at the edges : in the programs that produce and consume the data.
The OPML format doesn’t know or care whether it’s carrying a playlist, a blogroll, a blog, a subscription-list etc. It just concentrates on allowing the data to be moved from one program to another. Meanwhile, the OPML editor is continuously being upgraded to know how to get more “meaning” from the outlines it creates.
Why is it “better” for the meaning to reside at the edge?
a) edge points can be upgraded individually. If I want OPML to represent my attention data I just have to upgrade to the new version of FeedDemon which will support this. If I want to upgrade to a new version of a specially crafted Attention RDF I have to get the whole network to buy-in.[1]
b) The corollary of that is that if you’re crafting a model like Attention RDF, it’s really important to get it right. Because upgrading is such a pain. On the other hand, it’s far less important to get attention in OPML perfect from the start. It can be continuously tweaked with new releases of the software.
This again reflects on the question of “applications”. OPML as a format is worse than Xoxo or some other XML. But it’s gonna be the winner here as long as Dave Winer can keep up the development momentum in the OPML editor and the rivals don’t offer a compelling alternative.
Les Orchard writes a great concept demonstrator but then abandons it. [Danny] does a lot of coding. But because [he's] focused on putting the semantics into the protocol, [he doesn’t treat his] programs as the primary vehicle for getting his meaning into the world.
If I want to use Attention RDF as a user, what do I do? Wait around until people have thrashed out the spec on the wiki and then install a very generic RDF database and query language? Why wouldn’t I rather follow regular updates of an existing, already useful tool, which is promising to add this functionality in small sips over time?
[1] I recognise the usual riposte : about atomicity of SemWeb tags which means it isn’t an all-or-nothing thing. It’s just that I don’t see that this actually helps here.
For example, what would be an incremental roadmap for getting to widespread Attention RDF adoption?
We can’t say “well, we’ll roll out the att:readtimes tag to begin with, and then once people have started using that and seen the benefits, we can add att:lastread.”
But that’s exactly what a program-oriented SynWeb strategy can do. “We’ll start with ‘rank’ in the next version of the code, and then go from there.”
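That incremental strategy works mechanically because OPML consumers ignore attributes they don’t recognise. Here’s a minimal Python sketch of the idea; the sample outline and the “rank” attribute are hypothetical, not part of any published spec:

```python
import xml.etree.ElementTree as ET

# A hypothetical subscription list in which a newer producer has started
# emitting a "rank" attribute that older consumers never asked for.
OPML = """<opml version="2.0">
  <body>
    <outline text="Feed A" type="rss" xmlUrl="http://example.org/a.xml" rank="1"/>
    <outline text="Feed B" type="rss" xmlUrl="http://example.org/b.xml"/>
  </body>
</opml>"""

def read_subscriptions(opml_text):
    """Old-style consumer: knows nothing about 'rank', keeps working unchanged."""
    root = ET.fromstring(opml_text)
    return [o.get("xmlUrl") for o in root.iter("outline")]

def read_ranked(opml_text):
    """Upgraded consumer: opts into the new attribute, with a default when absent."""
    root = ET.fromstring(opml_text)
    return [(o.get("xmlUrl"), int(o.get("rank", 0))) for o in root.iter("outline")]
```

The point is that the meaning of “rank” lives entirely in `read_ranked`, not in the format: the file on the wire didn’t have to change its schema, and nobody else’s code had to be upgraded first.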
There should be plenty of data points, but thinking about it they’re rather hard to pin down.
It’s hard to predict e.g. whether Atom will totally supersede RSS 2.0 or fall by the wayside itself.
[Google] is leaning a lot closer to Paul Ford’s prediction than some people might suppose.
Now that more of the cards are on the table, we can begin to compare two fascinatingly different approaches to building out the data web
...
we have both Google and Microsoft flying the banner of simplicity -- a word that can mean different things in different contexts.
Microsoft and Google are being maneuvered into a massive game of chicken. I'll show everyone my Office data if you'll show your search data, and Dave is instigating it.
... we are in a ping server war. It's a little hard to see, but the ping server will become the new center of the net. Verisign's acquisition of Weblogs.com was the first salvo. I'm not sure Robert Cringley is right about Google-Mart, but he isn't entirely wrong. Google Base isn't just about volunteered structuring of data, but pushing pings ... The important point is there is tremendous value being the first to have information pass through your central node.
Most everything else I find very disappointing, on many levels. I really don’t know where to start.
Ok, start with the use of fairly arbitrary strings as identifiers. The Web has a well-defined system for identifiers, the URI. They’ve also got dates in RFC 822 format here - when did these folks last check any of the standard specs? They’re using RSS 2.0 and OPML as container formats. Marvellous choice, they’re inherently uninteroperable because they don’t have their own namespaces. Party like it’s 1999.
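For concreteness, the two date conventions in play look like this: RSS 2.0 inherited RFC 822-style dates from email, while Atom standardised on RFC 3339 (a profile of ISO 8601). A quick Python comparison, with made-up timestamps:

```python
from email.utils import parsedate_to_datetime  # parses RFC 822/2822 dates, as in RSS 2.0
from datetime import datetime                  # fromisoformat handles RFC 3339-style dates

rss_date = "Wed, 09 Nov 2005 13:30:00 +0000"   # the style of an RSS 2.0 <pubDate>
atom_date = "2005-11-09T13:30:00+00:00"        # the style of an Atom <updated>

d1 = parsedate_to_datetime(rss_date)
d2 = datetime.fromisoformat(atom_date)

assert d1 == d2  # same instant, two very different wire formats
```

Same moment in time, but a consumer needs two different parsers, which is part of why mixing container formats with different date conventions grates.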
I suppose what really irritates me most here is that they’ve also egregiously ignored the recent progress on syndication data modelling/exchange protocol around Atom. I know you shouldn’t put down to malice what you can explain with ignorance, but I can only imagine this is politically motivated. Microsoft are less likely to get community resistance to “embracing and extending” Dave Winer’s Own Syndication Stack than something community based.
...
PS. Mr. Winer, as you might expect, is gushing.
That's $300 million to essentially co-opt the Internet. And you know whose strategy this is? Wal-Mart's.
So here's what will happen. Microsoft will make a huge effort and get some good press toward the beginning. I'm not saying they don't mean this stuff, just that they don't really know WHAT it means. They'll attract developers and, by doing so, maybe take a small bite out of Open Source, but mainly they'll just sell stuff, which for them is good enough. Then, to maintain earnings growth, they'll turn on their developers. Finally, when things still don't work right, they'll turn on their customers, jacking up prices and milking the monopoly.
I’ve not figured out how to express this, but I think there’s a strong argument somewhere about finding the sweet spot for communications. At one end you’ve got all the semantics in the apps, the stuff on the wire being unintelligible to anything else. On the extreme, if we all used the same object language you could have all the semantics going on the wire, you’d have complete meaning on any (virtual) machine. XML is usually down the first end, but with RDF you can move a bit further up without needing complete prior agreement.
What's really happening in this space? Google has become one of the world's hyperefficient market makers. All these products are just ways to create goods (aka "inventory") to sell on that market, and, by doing so, to raise switching costs on both sides of the market.
Yahoo is not nearly such an efficient market maker. It has struggled to build a market like Google's; while it's had some significant wins recently, these have been premised largely on price competition - the fact that Yahoo is a bit cheaper, not that its market is more efficient.
If they press me for details on this theory (that only happens about half the time) I say that it's as if someone decided to re-invent more and more of Yahoo's popular services in random order, giving them a fresh user interface, less historical baggage, and usually one feature that really stands out (such as Gmail's storage limit or Google Talk's use of Jabber).
We originally announced pricing of Visual Studio Express at US$49. We are now offering Visual Studio Express for free, as a limited-in-time promotional offer, until November 6, 2006. Note that we are also offering SQL Server 2005 Express Edition as a free download, and that this offer is not limited to the same promotional pricing period as Visual Studio Express.
Can I develop applications using the Visual Studio Express Editions to target the .NET Framework 1.1?
No, each release of Visual Studio is tied to a specific version of the .NET Framework. The Express Editions can only be used to create applications that run on the .NET Framework 2.0.
Here's how it goes inside Microsoft:
Bill Gates: "We need web services!"
Jim Allchin: "What are you talking about? We'll just build them into Longhorn, which will ship in 2003, er, 2007."
Bill Gates: "Off with his head!" (Allchin is carried from room.)
Bill Gates: "We need web services!"
Ray Ozzie: "I'll give you web services. This is Microsoft, home of a million technologies. I'll be right back." (Leaves room.)
Ray Ozzie: (returning) "Sorry, no web services after all. They were perceived as being in conflict with Windows and Office and therefore purged."
Bill Gates: "Bring me the checkbook!"
And so Microsoft business development minions are scurrying everywhere looking for companies to buy that have products to redeem the promises Microsoft has already made.
Microsoft will spend whatever it takes to retain control, which could mean ANYTHING. Seriously, ANYTHING. Windows for free? Don't be surprised if it happens.
Therefore, the network effects are indirect, because the utility of the network to advertisers increases with the number of common users.
Using the new Sun Grid service, virtually any consumer with a Web browser will be able to upload proprietary documents, and have them automatically converted to Open Document Format (ODF).
Keep in mind that I'm suggesting Java will be dead like COBOL, not dead like Elvis. For the hardest enterprise problems, Java is safe for at least three to five years--things like sophisticated and scalable object-relational mapping, two-phase commit, and the like. Java is being threatened in a much more common, and I think important space: how do you build a simple web application that fronts a relational database? Especially a database schema that you control? This industry solves this particular problem over and over, and Java's not very good at it.
Second, why can’t URLs be used to identify people? Homepage URLs are as meaningful as email addresses (or hashes of email addresses, as I believe FOAF uses), so why can’t they be used to identify people?
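On the email-hash point: as I understand it, FOAF’s foaf:mbox_sha1sum property is the SHA-1 hex digest of the full mailto: URI, so a person can be identified consistently without publishing the address itself. A rough Python sketch, with an invented address:

```python
import hashlib

def mbox_sha1sum(email_address):
    """SHA-1 hex digest of the mailto: URI, in the style of FOAF's foaf:mbox_sha1sum."""
    return hashlib.sha1(("mailto:" + email_address).encode("ascii")).hexdigest()

# Two documents hashing the same address produce the same identifier,
# which is what lets FOAF "smush" descriptions of one person together.
digest = mbox_sha1sum("alice@example.org")
```

The same smushing trick would work with any stable property, which is exactly why the homepage-URL question above is a fair one.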
What happens if someone made a statement about the URL, say providing its dc:creator?
The use of a URI is questionable for the purpose in any case, see Identifying things in FOAF.
XFN markup isn’t really any more human-friendly than FOAF’s RDF/XML, and what’s more without a machine to interpret the stuff there’s not much for the human either way.
1) We look at the world only through a businessperson’s eyes.
2) We have no clue about the power of influentials.
Can Microsoft be seriously suggesting that partners and customers will be able to connect to these services from any platform, and mash them up however they like (subject to paying the required subscription fees or accepting the obligatory advertising, whichever applies)? I'm not sure that Microsoft really does have the nerve to carry this through, but it should: its consummate expertise at fostering developer ecosystems could really light a fire under its on-demand platform if it gets the API offering right. That would make today's announcement one of those landmark dawn-of-a-new-era moments: the day the great Microsoft mash-up began.
Does ship-early-and-often really work for a huge company doing massive PR pushes that's going to get millions of people checking out their early release?
They’ve had one or two high-profile setbacks recently, so this is probably around Plan C. I’m sorry, but I get the distinct impression that they’re still clinging to the sinking ship of Microsoft as Platform, and this looks like an act of total desperation following the realisation that the Web wasn’t under their control.
If this is their 5-year plan, then that’s also probably their remaining lifespan as one of the giants in the industry.
My favorite line, from Ray Ozzie: "Some say that the internet itself is the platform, and in many ways that's true. The internet has always been described as a network of networks, and it's now becoming a platform of platforms, as every web site is potentially a platform."
Not a bad idea, but there are a few problems with this strategy.
1) The market is not huge
2) There are many (many) substitutes, most of which are open-source (=free)
3) Most of the end-user markets are winner-take-all markets; i.e., there's not a huge gap for a Metafilter in, say, finance - Mefi's already got it covered.
4) But the biggie is really that Ning is a layer commoditizer. Ning's bet is essentially the peer production/cheap coordination bet - that the core atomizes, and so value shifts to the edges of the value chain, and Ning will be able to grab a share (somehow). Positioning as middleware contradicts these economics.
The platform issue got escalated, as it had the potential to cause major problems. As it turned out, the answer was simple. Our project leader simply said that the company uses Microsoft products and that means that all diagrams, wireframes, etc. need to be done in Visio because of future hand-off to enterprise teams.