Monthly Archives: June 2004

OSS .NET implementation Mono 1.0 released

After three years of development (nearly to the week), Mono 1.0, the .NET implementation for Linux and other UNIX-like systems, has been released. The objective of Mono is to enable Unix developers to build and deploy cross-platform .NET applications.

Sounds simple, right? Yet Mono has brought with it a stigma of misunderstanding and complexity. What is Mono? If you’re a Windows fan, Mono lets you run your favorite .NET applications in Linux out of the box, right? If you’re an OSS fan, Mono is a perfect bridge to bring Windows developers into the fold, right? From the Mono project site:

Mono consists of:

A cross platform ECMA CLI runtime engine.
A cross platform IKVM Java runtime engine.
C# 1.0 compiler.
Development toolchain.
Class libraries implementing the .NET 1.1 profile.
The Gtk# 1.0 GUI programming toolkit.
Mono specific libraries.
Third party convenience libraries bundled with the release.
GNU Classpath for the CLI.
Visual Basic runtime.

From its inception, Mono has generated skepticism within the Open Source community. Richard Stallman, founder of the Free Software Foundation, has demanded an explanation of what Mono is supposed to accomplish. IBM and Sun, who are normally very vocal about every other OSS project, have been atypically silent.

Mono, as an entire project, covers two areas. Its mantra is "There Are Two Stacks": the Microsoft-compatible stack, which formerly relied on WINE for compatibility but now uses new managed code and other technologies to bring a near-compatible framework to developers; and the "Free" stack, which brings products like Gtk#, Gnome#, Gecko#, and other Free Software projects into the fold. Open Source developers will undoubtedly gravitate toward the "Free" stack, while the "Microsoft" stack will be marketed toward existing developers looking to move to a UNIX-based system.

Along with the 1.0 release comes MonoDevelop, which brings a feature-complete Integrated Development Environment (IDE) to the mix. Initially based on SharpDevelop, MonoDevelop has quickly matured into a full-featured port, and now functions as a showcase of what is possible with Mono. Precompiled packages for SUSE 9, SUSE 9.1, Red Hat 9, SLES 8, Fedora Core 1 and 2, and Mac OS X are now available.

The .Mac gap

When 15GB is not enough

It’s interesting how the Worldwide Developers Conference has become the new Macworld New York, especially since the old Macworld New York (which is the old Macworld Boston) has become the new Macworld Boston. Last year at the WWDC, Apple introduced us to Panther and the Power Macintosh G5. This year Apple CEO Steve Jobs showed off Tiger (OS X 10.4) and new aluminum LCDs to match the G5 towers. This is a good thing because I have been fretting about my 23″ Cinema Display clashing with my Dual 2.5GHz G5 tower once it arrives, and had been anxiously watching HGTV in hopes of getting some decorating tips on how to wed the two disparate styles.

Part of the Tiger preview revolved around integration with .Mac. Tiger replaces iSync, which currently keeps your bookmarks, Address Book, and calendar synchronized, with Sync Services. Sync settings can be controlled from within Sync Services-aware applications, and the synchronizations themselves will be done transparently in the background. Users will be able to fine-tune the .Mac synchronization as well, creating different settings for each of the Macs they have set up to synchronize via .Mac.

That’s swell, Apple. Now where’s my gigabyte of e-mail and file storage?

In the beginning there was iTools

And it was good.

When iTools was unveiled at Macworld San Francisco in January 2000, it was actually a good deal. For the low, low cost of… nothing, Mac OS 9 users got 20MB of storage via the iDisk and a mac.com e-mail account. Of course, the iTools e-mail service was POP only, meaning no web mail, but iTools users would still be able to access their e-mail via any web-based service that allowed for checking external POP accounts.

iTools’ e-mail offerings improved over time. In March 2001, Apple began providing SMTP services, which made Mac.com e-mail accounts more portable, as users didn’t have to change their SMTP settings when traveling. Apple then transitioned the service from POP to IMAP, allowing easy access from any e-mail client supporting IMAP connections. Finally, May 2002 saw the beta of .Mac web mail, putting the iTools e-mail offering ahead of the game: in addition to the browser-based access offered by services such as Hotmail and Yahoo!, iTools provided a whopping 10MB of e-mail space. Ten megabytes! Who could ever conceive of using all of that space?
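
The practical difference is that IMAP leaves the mail on the server, so any IMAP-capable client sees the same mailbox. A minimal sketch using Python's standard imaplib; the hostname and credentials here are placeholders, not Apple's actual .Mac settings:

```python
import imaplib

# Connect to an IMAP server over SSL. Hostname and credentials are
# placeholders for illustration only.
conn = imaplib.IMAP4_SSL("mail.example.com")
conn.login("user@mac.com", "password")
conn.select("INBOX", readonly=True)        # the mail stays on the server
typ, data = conn.search(None, "UNSEEN")    # look for unread messages
print("%d unread messages" % len(data[0].split()))
conn.logout()
```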

And iTools begat .Mac

And it was still (sort of) good.

All good (free) things come to an end, especially if you’re a Mac user. In July 2002, Apple announced that iTools was being replaced with something called .Mac. The good news was that subscribers’ iDisks got larger (to 100MB) and their e-mail storage capacity grew as well (to 15MB). The bad news was that the service would now cost US$99 annually.

To take some of the sting out of charging for what was a free service, Apple offered value-added items such as anti-virus software (Virex, for the hordes of OS X viruses in the wild); Backup, a simple backup utility that would back up files to your iDisk and other storage devices; some free photo prints for users of iPhoto; additional e-mail accounts for US$10 a pop; and more. As expected, there was much outcry throughout Macland about Apple charging for what was formerly a free service. There were predictions of mass defections, and after the 60-day .Mac trial accounts expired, many users decided that forking over a C-note every year was not worth what they were getting.

However, many other iTools subscribers stuck around for .Mac. I was one of them. Why did I keep it? Part of it was inertia: my mac.com e-mail address had been my primary address since iTools was launched, and I didn’t really want to go to the trouble of telling everyone it had changed. I also liked my iDisk. With a child and the grandparents 1,000 miles away, being able to painlessly publish photos and movies made .Mac a very attractive option for me. The extras? Well, we ordered the photo prints (and they turned out nice), and I have faithfully kept my Virex virus definitions up to date. On the other hand, I have weathered many a web mail outage (especially in late 2002, even after the beta concluded), and have suffered through the somewhat-painful (although improving) WebDAV implementation in Mac OS X. The tight integration with the OS is great. In fact, .Mac has been a good value. But that is changing…

Excuse me, did you say one gigabyte?

In what some thought was an April Fools’ joke, Google announced Gmail. The details have become well-known by now: 1GB of storage space, ads served based on the content of the e-mail, a nice web-based interface, and Gmail invitations selling for up to US$70 on eBay at one point.

In launching the (still-beta) Gmail, Google instantly upped the ante on e-mail offerings. One gigabyte of e-mail storage was far more than any other account offered, and it was free to boot. Never again (or at least not for several years) would anyone need to worry about deleting e-mails to stay under a meager storage limit. Questions arose about the costs of providing a gigabyte for each of what will be millions of users, but Google claimed to be able to pull it off for under US$2 per subscriber. Keep in mind that figure only covers those who actually use most of their space; many users will never even come close to that.
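
Google's claim makes more sense once you average over the whole user base. A quick back-of-the-envelope sketch; every figure here except the 1GB quota is an assumption for illustration, not something Google has published:

```python
# Back-of-the-envelope storage cost per subscriber. The cost-per-GB and usage
# figures are assumptions for illustration, not numbers from Google.
cost_per_gb = 3.00      # assumed US$ cost of 1GB of redundant server storage
heavy_users = 0.20      # assume 1 in 5 subscribers actually approaches 1GB
light_usage_gb = 0.05   # assume everyone else averages around 50MB

avg_gb = heavy_users * 1.0 + (1 - heavy_users) * light_usage_gb
print("average storage cost per subscriber: US$%.2f" % (avg_gb * cost_per_gb))
# -> US$0.72, comfortably under the claimed US$2 ceiling
```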

Eventually, most of Google’s competitors responded in kind. Earlier this month, Yahoo upped its e-mail storage limit for subscribers to 100MB (Mail Plus members get 2GB for US$19.95). Hotmail then followed last week with an increase all the way up to 250MB. Other services (such as Spymac) are also now offering a gigabyte of mail storage to their customers. In short, Google single-handedly forced many of its new competitors to alter their strategy of steering customers who wanted extra storage into paid subscriptions. Instead, Hotmail, Yahoo, and others are being forced to give more for less.

Just about everyone but Apple, that is.

The .Mac gap

The moment other services started following Gmail’s lead, .Mac’s value proposition became weaker. As I was cleaning out my e-mail box last month to ensure I stayed below the 15MB limit, I thought about how this hassle would disappear from my life if I only had more space. Chances are, I’m not the only one thinking such thoughts.

The present incarnation of Apple is focused on maximizing revenue. .Mac has fit nicely into this strategy: in his WWDC keynote speech yesterday, Steve Jobs remarked that Apple has 500,000 paying customers for .Mac. Assuming they all pay full price for the service, that works out to roughly US$50 million per year in revenue. While that is a relative drop in the bucket compared to the US$1.909 billion in revenue Apple had for the second quarter of FY 2004 alone, .Mac likely contributes disproportionately to Apple’s profitability. It’s a relatively high-margin product.
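
The arithmetic behind that estimate, using only the figures above:

```python
# Checking the revenue estimate with the figures from the paragraph above.
subscribers = 500_000
price = 99.00                # US$ per year for a full-price .Mac subscription
revenue = subscribers * price
print("US$%.1f million per year" % (revenue / 1e6))  # -> US$49.5 million
```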

Would people bail on .Mac because of the e-mail storage issue? After all, .Mac offers a host of other features: iSync, Backup, iDisk, the Mac OS X Learning Center, and a tight integration into OS X (which will become even tighter in Tiger). Problem is, how many .Mac users actually take advantage of those items? Those who partake of the entire .Mac experience are likely to stick it out. It’s those who use .Mac primarily for e-mail that Apple needs to worry about.

What percentage of .Mac subscribers are in it primarily for the e-mail? It’s difficult to say. However, an admittedly unscientific poll I started in Macintoshian Achaia showed that 50% of the respondents use .Mac primarily for the e-mail. While the sample size can hardly be considered representative, if even 25% of current .Mac users are subscribing primarily for e-mail, Apple should be concerned about that.

Sure, people have been able to roll their own .Mac for some time: just sign up for a US$5-per-month web hosting account that offers IMAP, register a domain, and you’re all set. Better yet, get a static IP (or an account with a dynamic DNS service such as DynDNS) and host it yourself.

This is different. Once Google gets Gmail working smoothly with Safari (and it works pretty well now), there will be no barrier to entry for any Mac user. Sign up and kiss your e-mail storage concerns goodbye for the foreseeable future. Apple may not consider .Mac to be a competitor to Gmail, Hotmail, and Yahoo, but the truth is, Gmail is commoditizing web mail. Many of those who use .Mac primarily for the e-mail will be tempted to cancel their .Mac account at renewal time and get a free account somewhere else with much more storage. Unless Apple wants to see an exodus of .Mac subscribers, they need to make some changes.

Actually, just one change.

Increase the storage limits of .Mac. 15MB of e-mail storage is no longer good enough. Like it or not, consumer expectations are changing as a result of Gmail and its gigabyte of storage. Tired of the megahertz gap? Unless Apple makes some changes, they will be staring at a megabyte gap as well. Storage is relatively inexpensive these days, much cheaper than when iTools was first announced four-and-a-half years ago. It’s time for Apple to pass some of those cost savings on to its customers. Give .Mac users 250MB of e-mail space, and another 250MB of iDisk room. No one likes to go through their e-mail folders and iDisks and delete and/or archive stuff, and there’s no reason .Mac subscribers should have to do that with their e-mail.

Making those changes will not only keep customers from defecting to free alternatives, it also has the potential to grow the .Mac subscriber base. The current 15MB of e-mail is not attractive anymore, but 250MB of space along with all the other goodies that go with .Mac would make it a much more compelling offering and likely lead to a larger subscriber base. Yes, Apple will incur additional costs from increasing the current limits, but those should be more than offset by new subscribers.

It’s time to close the .Mac gap.

Quick hits

Spotlight

One of the more interesting features of Tiger is Spotlight. It looks as though Apple has turned its attention toward searching the desktop in much the same way as Microsoft, Google, and GNOME. Spotlight will be accessible from throughout OS X and will include “intuitive search” capability. Spotlight will combine metadata and indices to return fast search results, but questions have arisen about how new data will be added to the search index. According to some information made available at WWDC, it turns out that Spotlight’s daemon is hooked into the kernel, so it will get file-related notifications “as they happen.” This should mean that the hard drive won’t need to be reindexed periodically to keep the search results up to date.
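
For illustration, here is a generic sketch of notification-driven indexing. This is only my sketch of the idea, not Apple's actual Spotlight API; the queue merely stands in for the kernel hook that feeds the daemon:

```python
# A generic sketch of notification-driven index maintenance: instead of
# periodically rescanning the disk, the indexer consumes file-change events
# as they happen. Not Apple's actual API.
import queue

events = queue.Queue()   # (action, path) tuples, fed by the "kernel" hook
index = {}               # path -> extracted metadata

def extract_metadata(path):
    # Placeholder for a real metadata importer.
    return {"name": path.rsplit("/", 1)[-1]}

def indexer_loop():
    while True:
        action, path = events.get()      # blocks until a change arrives
        if action in ("create", "modify"):
            index[path] = extract_metadata(path)  # update just this file
        elif action == "delete":
            index.pop(path, None)
        # No periodic rescan of the whole disk is ever needed.
```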

Alas ADC, we hardly knew ye

The arrival of Apple’s new displays means a farewell to ADC (Apple Display Connector). ADC carries power, video, and USB signals over a single cable, reducing cable clutter if you have a video card with an ADC connector (and increasing it if all you have is DVI). While I have been a fan of ADC because of the aforementioned features, it unfortunately never caught on much beyond the Cinema Displays (Formac built some beautiful ADC-equipped LCD monitors). Moving away from ADC to standard DVI connectors is a solid move for Apple.

iTunes Music Store in Europe

As nearly everyone knows by now, iTMS finally crossed the Atlantic and landed in the UK, France, and Germany. It was preceded by Napster and a host of other European download services (most of which were affiliated with the UK’s OD2), and Apple failed to sign some of the independent labels, which did not like the contract terms they were offered. Despite that, iTMS managed to sell over 800,000 songs in its first week of operation, more than OD2 had sold in all of 2004 (OD2 was purchased by US digital media company Loudeye last week). It looks like iTMS has translated rather well, and may be on its way to establishing itself as the online music store of choice in Europe as well as the US. Now if they could only make enough iPod Minis to sell there…

Amazon.com and Toys R Us head to divorce court

Back in 2000, when Toys R Us’ online retail website was a disaster and Amazon.com was still searching for a profit, the two struck a deal that would be beneficial to both. According to the prenuptial agreement, Amazon would get a big-name retailer to lure customers in search of toys, games, and baby items, while Toys R Us would get the eyes of millions of online shoppers in return for US$50 million per year for exclusivity rights. Jump forward four years: Amazon is now in the black, while Toys R Us is still struggling online after posting an operating loss of US$18 million on US$376 million in sales last year.

Feeling that Amazon had broken the prenup, Toys R Us recently sued Amazon for US$200 million, claiming it was seeing other retailers (who were also offering toys, games, and baby products) on the side. Amazon decided to see Toys R Us’ lawsuit and raise it with a US$750 million countersuit of its own, while also asking for the marriage to be dissolved.

Amazon alleges that its business has been harmed by Toys R Us’ "chronic failure" to meet its obligations. For instance, Amazon claims, Toys R Us has had trouble maintaining sufficient stock. According to court papers, Toys R Us has been out of stock on more than 20% of its most-popular products.

Now that Amazon has turned the corner into profitability and is trying to become the eBay of third-party retailers, it feels its exclusive relationship with Toys R Us is a liability. Amazon is apparently unhappy with Toys R Us’ ability to provide products in peak buying seasons and is also disappointed with its selection of available products. Amazon would like to offer toys, games, and baby products that are priced to compete with discount retailers like Wal-Mart. Toys R Us hasn’t been offering such products, and Amazon believes that other third-party retailers could fill that gap.

Amazon claims the agreement allows for exceptions to the exclusivity rights, but the courts may not be the ones to decide the fate of the retailers’ marriage. With Toys R Us’ online division on shaky ground, it may not survive the upcoming holiday crunch without help from Amazon’s hefty user base. Some analysts expect that several long sessions with a marriage counselor will provide reconciliation in time for the holiday shopping season.

Intel launches 90nm Xeon

On Monday, Intel unveiled its new Xeon CPUs (codename: Nocona), its first x86-64 Extended Memory 64 Technology (EM64T) CPUs to ship. The top-of-the-line 90nm Xeon is clocked at 3.6GHz with an 800MHz FSB, supports Hyper-Threading, and adds 13 new streaming SIMD (SSE3) instructions along with SpeedStep. Most notably, the 90nm Xeon runs both 32-bit and 64-bit software. The first batch of new Xeons is targeted at dual-CPU workstations, with server versions due out in the next two months. Processors designed for use in four-way (or more) configurations will be available late this year or in early 2005.

So how does the new Xeon stack up against the AMD Opteron? One advantage it has is the ability to use DDR2 memory, which the Opteron cannot. On the other hand, the Opteron has an integrated memory controller and consumes less power than the Xeon (we will see whether that remains the case once 90nm Opterons ship later this year). Along with the new Xeon, Intel is also launching the E7525 chipset (Tumwater), which features support for PCI Express and the aforementioned DDR2. iWill and Tyan have both announced new Xeon motherboards based on the E7525.

AMD has gained momentum in the server space with the Opteron, with several OEMs, including Sun and HP, offering workstations and servers built around AMD’s flagship CPU. The success of AMD’s x86-64 architecture forced Intel’s hand, leading to January’s news that Intel would be offering its own 64-bit Xeons. Intel’s last set of 32-bit Xeons, released in March, closed the gap on the Opteron, mostly thanks to a hefty 4MB of L3 cache (benchmarks for the new Xeons are not yet available). In addition, the new Xeons are priced fairly reasonably, at US$851 in quantities of 1,000. What remains to be seen is how the Itanium will fare now that the 64-bit Xeon is available. While Intel is confident that the new 64-bit Xeons will not cannibalize Itanium sales, sales of the Opteron have demonstrated the market’s desire for a relatively inexpensive 64-bit CPU. The new Xeon is aimed squarely at carving out a big slice of that market for Intel.

US Supreme Court bars enforcement of Child Online Protection Act

The US Supreme Court wrapped up its session by announcing a decision that bars the enforcement of the Child Online Protection Act (COPA). COPA, passed in 1998, was an attempt by Congress to keep Internet pornography out of the reach of children by requiring credit cards, access codes, or other means of age verification to access adult content, with fines of up to US$50,000 for violations. By a 5-4 decision, the Court remanded the case to a lower court for a trial to resolve the issues raised in the original lawsuit filed by the ACLU, saying that the law as written violates free speech:

For now, the law, known as the Child Online Protection Act, would sweep with too broad a brush, [Justice] Kennedy wrote. "There is a potential for extraordinary harm and a serious chill upon protected speech" if the law took effect, he said.

What is especially interesting about the ruling is that the Court was able to recognize the technological issues at hand in addition to the more obvious free speech ones. Kennedy noted the inadequacies of filtering software and felt that a full trial was necessary to determine whether technology has advanced enough to make the legislation feasible. (Psst, Justice Kennedy: the answer is "no.")

Presently, it is not known whether the Bush Administration will choose to press on with a full trial. Enforcement of COPA had previously been blocked by a Philadelphia appeals court on two separate occasions. A previous version of the law, passed in 1996, was unanimously ruled unconstitutional, and further litigation over the current law may very well end with the same result. Copyright-protection legislation currently in the works, such as the INDUCE Act (which bases part of its rationale on the need to protect children from pornography), could find itself blocked for the same reasons COPA and its predecessors were.

As a parent of two children and someone who spends a lot of time on the Internet, I know all too well how easy it is for kids to find things they should not be looking at. I am also aware that there is no silver bullet that will magically keep porn and other adult content off their computers. Filters and other tools should only be used as a supplement; the best solution is good parenting. Knowing what your kids are doing on the computer goes a long way.

Microsoft goes after hobbyists with Express developer tools

Microsoft is targeting "amateur" programmers with the announcement of its Express developer tools. The Express tools consist of streamlined versions of Visual Studio and SQL Server, covering Visual Basic, C#, C++, and J#. SQL Server Express will run on single-CPU machines with up to 1GB of memory and will allow up to 400GB of storage, but it will lack the management and reporting capabilities of the full-featured product. Microsoft is hoping that SQL Server Express will prove an attractive alternative to the open source MySQL in the low-cost managed server space.

"It is really for a data-driven application where you just want a place to put your data in and maybe have it interoperate with (higher-end versions of) SQL Server through replication," [Director of Product Management for the SQL Server unit Tom] Rizzo said.
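
For a feel of what that use case looks like, here is a minimal sketch of such a data-driven app, using Python's built-in sqlite3 as a stand-in since the SQL Server Express beta isn't out yet:

```python
# The kind of small, data-driven app Rizzo describes, sketched with Python's
# built-in sqlite3 standing in for SQL Server Express: one table, no
# management or reporting tooling, just a place to put your data.
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello, Express",))
conn.commit()
for row in conn.execute("SELECT id, body FROM notes"):
    print(row)
conn.close()
```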

Visual Studio 2005 and SQL Server 2005 are slated for availability in 2005, with beta versions coming soon; the Express tools will be released at the same time. The Express beta is a free download, but Microsoft has yet to announce final pricing. Current student tools do not allow for distribution of commercial software; the Express licenses may (although final licensing terms have yet to be announced).

Keeping developers on the Windows development bandwagon has become a more pressing concern for Microsoft. While Microsoft has scored a hit with .NET, which has caught the eye of many developers, there are still questions about the future of Windows development. Developing web-based applications has become simpler and more popular, and Longhorn is still a bit of an unknown quantity at this point. Microsoft is hoping that the Express tools will help both shareware and student developers keep the faith. Keeping the price down on the Express tools (say, around US$50) would go a long way.

RealNetworks shows some love to Linux

RealNetworks announced that Linux distributors Novell and Red Hat will begin shipping the open-source Helix media player with their distros. In addition, once RealPlayer 10 for Linux is available later this year, Novell and Red Hat will bundle and support it as well. Helix, which is open source under the terms of the GPL, is the foundation for the upcoming RealPlayer 10 and has been licensed for various commercial offerings (such as the Sony Altair).

RealPlayer has not been a favorite of many over the years. The free version of the player was often difficult to find, registration was a hassle, and the installer would hijack file associations without the user’s consent. Ultimately, some media outlets tired of listener complaints and switched to other formats. Recently, RealNetworks has made some changes, leading our own Fred Locklear to describe RealPlayer 10 as "the least annoying RealPlayer version since 5.0." Will Real’s embrace of open source do anything to alleviate the antipathy Linux users have had for its software?

Haterism aside, the announcement could be good news for both RealNetworks and Linux on the desktop. RealNetworks has been squeezed by both Apple and Microsoft, especially in the online music download business (remember Real asking Apple to open up its FairPlay AAC format?). Porting its flagship product to Linux while also making the source code available is a solid move: as Linux on the desktop matures, Real will be in a good position to be part of that growth, especially if RealPlayer becomes the de facto standard Linux media player. Conversely, having a "mainstream" app like RealPlayer available should enhance the prospects of Linux on the desktop. The more familiar applications are available for Linux, the lower the barrier to adoption.

NVIDIA re-introduces us to SLI

3dfx, rest its soul, gave us all something to lust after in its Voodoo2 SLI product. Standing for "Scan Line Interleave," SLI technology allowed you to connect two Voodoo2 cards together, and the pair would split the rendering work by each handling every other line on the display (e.g., card 1 would do the even lines, card 2 the odd). Such was geek lust in 1998. In 2004, tag-team rendering looks like it’s going to be all the rage again.

First, of course, there’s Alienware’s Video Array technology, but that’s proprietary and apparently only available in extremely expensive (although well-crafted) systems. NVIDIA wants to change all that. With the advent of PCI Express, the conditions are ripe for multi-card solutions, and NVIDIA saw the opportunity earlier than most, building support for cooperative rendering into the NV40. Connecting two cards by means of a small bridge, NVIDIA’s technology splits the screen in half horizontally to divvy up the workload, but here’s the kicker: the division isn’t fixed. Rather, using load-balancing techniques, NVIDIA’s solution ensures that both cards are doing as much as they can. If you can imagine 3D games where part of the screen is mostly static (a HUD, a dashboard, your flight controls, etc.), you can see why this is a boon (Alienware’s solution is somewhat similar). So, meet NVIDIA’s Scalable Link Interface, aka the new SLI.
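
To make the load-balancing idea concrete, here is a toy sketch. This is my illustration of the general technique, not NVIDIA's actual algorithm: the split drifts toward whichever card finished its half of the frame faster.

```python
# A toy sketch of dynamic split-screen load balancing (an illustration of
# the general technique, not NVIDIA's actual algorithm).
def rebalance(split, top_ms, bottom_ms, step=0.02):
    """split is the fraction of the screen rendered by the top card (0..1)."""
    if top_ms > bottom_ms:        # top card overloaded: shrink its share
        split -= step
    elif bottom_ms > top_ms:      # bottom card overloaded: grow top's share
        split += step
    return min(max(split, 0.1), 0.9)

split = 0.5
for top_ms, bottom_ms in [(16.0, 8.0), (14.0, 9.5), (12.0, 11.0)]:
    split = rebalance(split, top_ms, bottom_ms)
    print("top card now renders %.0f%% of the screen" % (split * 100))
```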

What’s the price for entry? As you would expect, you’ll need two identical 6800 PCI Express cards and a motherboard that sports two X16 slots. Currently you can’t pick up a consumer-class motherboard that fits the bill, and few people are insane enough to opt for a dual-Xeon solution to get dual X16 support (aside: iWill’s board for Alienware uses one X16 and one X8 slot). Not to worry, however. NVIDIA’s future PCI Express nForce chipsets will more than likely support two X16 slots, and other chipset makers, including Intel, may follow suit. Still, the lack of production motherboards supporting this means that it will be a while before you can get your hands on a proper SLI solution. Indeed, no one has been able to benchmark this aside from NVIDIA, which claims to see an 87 to 100% increase over single-card solutions in 3DMark03 and Unreal Engine 3, respectively. We’ll see about that.

NVIDIA will begin by offering specialized systems through the OEM channel, with attention to the DIY market following thereafter. As for ATI, will they hop on board, too? It’s hard to say. Even if SLI is out of the price range of many gamers, the concept will likely influence buying decisions. A given US$300 video card is much more attractive today if you know you can pick up another one for, say, US$200 in a year’s time and nearly double your performance. SLI might not get used that often, but it will sell cards. More in-depth information (plus a lot of superfluous information) can be found at Tom’s.

IBM tries to stretch the 970 in two directions at once

eWeek is running some good coverage of IBM’s attempts to scale the PowerPC 970 both further into the high end (3GHz and beyond) and into the mobile space. The first story, which deals with the problems IBM has had producing a 3GHz 970, is good, but it focuses entirely on process technology (i.e., defect density, yield, etc.) and neglects an important aspect of IBM’s scaling problems with the new chip: sub-optimal circuit timing.

If you read my original round of 970 coverage, including the subsequent interview with the chip’s designers, then you know that the 970 contains at least two undesirable artifacts of the heavily automated approach that IBM used to create and tune the chip’s layout. The first of these artifacts is the added Altivec (or VMX) instruction latency, which stems from the die placement of the Altivec unit.

The second artifact, which is the one that’s important for our purposes here, is that the circuit timing isn’t as well optimized as it could be for higher clock frequencies. When it comes to getting a chip’s clockspeed up into the stratosphere, there’s no substitute for careful, hand-tuned optimization of the chip’s layout. The problem with such careful optimization is that it takes serious time and effort, and it isn’t really an option for a chip that’s under the kind of time-to-market pressure that the 970 was under. Because IBM needed to get this design out the door, they relied heavily on their automated tools at the expense of clockspeed headroom and some extra Altivec latency.

I fully expect that the next full iteration of the 970 will be more optimized and will scale more easily in terms of clockspeed than the current design.

At any rate, I said all this just to make the point that any process migration problems that IBM and others may be having in the 90nm transition aren’t the whole story when it comes to the 970’s clockspeed.

The second story, which I don’t have too much to say about, is that IBM has invested some serious design effort into dynamic, software-controlled frequency scaling as a way to reduce the 970’s average power dissipation enough to get it into a laptop. Such efforts are going to help get the chip into the laptop form factor, but they aren’t by any means the whole story in terms of overall system power draw and battery life. The 970’s high-speed frontside bus and memory subsystem don’t run on love and sunshine. Nope, you gotta have juice for the motherboard, too, and more juice than the G4’s much lower-bandwidth buses require.
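
As a rough illustration of what software-controlled frequency scaling buys you, here is a simplified sketch; the operating points and thresholds are invented for the example, since IBM hasn't published the 970's actual tables:

```python
# A simplified sketch of software-controlled frequency scaling. The operating
# points and thresholds are invented for illustration; they are not IBM's
# actual figures for the 970.
FREQ_MHZ = [600, 1200, 1600, 2000]   # hypothetical operating points

def next_step(idx, utilization):
    """Pick the next frequency step from current CPU utilization (0..1)."""
    if utilization > 0.80 and idx < len(FREQ_MHZ) - 1:
        return idx + 1               # ramp up under load
    if utilization < 0.20 and idx > 0:
        return idx - 1               # drop the clock when mostly idle
    return idx

idx = len(FREQ_MHZ) - 1
for util in [0.05, 0.10, 0.90, 0.95, 0.05]:
    idx = next_step(idx, util)
    print("utilization %.0f%% -> %d MHz" % (util * 100, FREQ_MHZ[idx]))
```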

The take-home message here is that the G5 Powerbooks just got a little closer, but I wouldn’t expect the initial battery life on these to be much better, if it’s better at all, than the current G4 Powerbook line (and, for what it’s worth, it’ll be much worse than what a hypothetical 90nm G4-based Powerbook would get). I’m sure I’ll probably get flamed for saying that, but that’s my prediction.

Also, I should note that it’s looking to me now like Apple may very well go ahead and launch a G5 Powerbook without an on-die DDR controller. We’d been predicting that they’d wait for this move, but with all the WWDC rumors it’s not looking likely. We’ll find out soon enough, though.

Fahrenheit 9/11: will you go see it this summer?

It’s Friday evening, and from time to time we take off the suits (bunny suits, mind you) and open up the floor for discussion of things off topic. As with all "political content," feel free to ignore it if this kind of stuff gets your blood boiling. You’ve been warned. Heat. Kitchen. And probably plenty of :rolleyes: lie below the discussion link.

Unless you’ve been living under a rock, you know that Michael Moore’s new "documentary" Fahrenheit 9/11 opens in theatres across the US today. It is expected to outperform any documentary before it, despite the movie’s rather vocal opponents. It’s already setting one-day records in New York City, and today’s opening is expected to be quite large. The question is: are you going to see it? Why or why not?

I think Roger Ebert has already said it best when he noted that all documentaries have a "point of view." It simply is not the case that all documentaries are unbiased except for those that we don’t like, which we then see as magically "biased," and refuse to call them documentaries. That said, one would be hard pressed to call just anything a documentary, and this is where Moore draws much ire — what are the boundaries between political sabotage, artful propaganda, telling it like it seems, and proffering truth?

In many cases, the answers depend not on principles, but on how far someone is willing to ride along before embracing or rejecting the movie’s content. However, unlike Bowling for Columbine, this time Moore is flat-out telling the public what this is: an op-ed cast in film, aimed at hurting Bush’s chance for reelection. Does such an aim wrapped in the guise of a "documentary" make the film "dangerous," or should we embrace political activism as something that encourages thought, discussion, and dare I say it, dissension?