The T-Mobile G1 Android-powered phone is the first mobile phone with the operating system (OS) designed by Google.
Given Google's reputation as a trendsetter, I expected great things from its first cellphone, especially since it is emerging more than a year after Apple launched the iPhone.
While it's far from perfect, the G1 powered by Google's Android OS is packed with consumer-oriented features that may even make iPhone fans take notice.
Made by Taiwan's HTC Corp, the G1 was released last month in the United States, at US$179 on a two-year contract.
Beneath the touch screen is a slide-out QWERTY keyboard for those who prefer the feel of keys rather than virtual ones on screen.
From the start, the G1 was easy to use. It includes an intuitive interface and many of Google's familiar services, like search, Gmail and Google Talk. There's also Google Maps, which is enhanced by a compass that lets you see locations in the Street View feature by moving the phone as you hold it.
I had no trouble doing things like instant messaging my friends, searching for stores, and yes, making phone calls. There is a good-looking browser that is simple to navigate, and the device's screen is sharp.
The downside of all the talking, surfing and content downloading is that the battery can go flat.
A key element is the Google-run Android Market, which lets third-party developers offer add-on programs and games that you can download wirelessly to the G1.
I liked a few applications, especially the Barcode Scanner that uses the G1's 3-megapixel camera to read the UPC barcodes on things like product boxes and book jackets and then links you to web searches.
True smart-phone greatness can take time, and I'm willing to cut Google a little slack. After all, the first iPhone wowed, but it was not without issues. So I'm optimistic the G1 will improve soon.
The 32nd edition of the Top 500 supercomputers list was released late last week, and Linux-based systems occupy 439 of the 500 positions. Other Unix variants, including BSD-based systems, occupy another 24 positions.
The 32nd edition of the biannual list is also notable as the first time a Windows-based supercomputer has made it into the top 10 positions. That machine sits at number 10 on the list and is based at the Shanghai Supercomputer Center. Windows-based machines account for a total of 5 of the top 500 supercomputers in the world.
The remainder of the top 500 positions were taken up by mixed-environment systems and a single Mac OS system.
At the top of the list of 500 is the 1.105 petaflop/s IBM supercomputer, nicknamed Roadrunner, based at Los Alamos National Laboratory. In second spot was the Cray XT5 supercomputer, called Jaguar, at Oak Ridge National Laboratory.
For supercomputer fans, here are some more numbers to mull over:
· A total of 379 systems use Intel processors. IBM Power processors and the AMD Opteron family are almost tied for second place, with 60 and 59 systems respectively.
· Quad-core processor-based systems are starting to dominate the Top 500: 336 systems already use them, while 153 systems use dual-core processors and only four systems still use single-core processors. Seven systems already use the nine-core IBM Cell processor found in Sony's PlayStation 3.
· HP took the lead in number of systems, with 209 against IBM's 188.
· The entry level for the list has moved up to 12.64 Tflop/s on the Linpack benchmark, compared to 9.0 Tflop/s six months ago. The last system on the current list would have been ranked at position 267 in the previous Top 500 just six months ago.
· The US dominates the charts, accounting for 291 of the 500 systems. The European share is now 151 systems and Asia accounts for 47 systems. Africa has just one system in the Top 500: the South African IBM BlueGene computer based at the Centre for High Performance Computing in Cape Town, which runs SUSE Linux Enterprise 9 and comes in at a credible position 128.
The first build created by the Dev Team is now running on the iPhone, iPhone 3G, and the original iPod touch in what's considered a "draft" version.
The software primarily includes the main Linux 2.6 kernel as well as rudimentary graphics, serial, and other functional drivers that are just enough to get a command line running when input is sent over the USB interface; the accelerometer, audio, networking and even the touchscreen have yet to receive any kind of software support.
Developers have also made a basic multi-boot front end known as OpeniBoot that lets users toggle between Apple's own operating system and an alternative platform.
While only just beginning, the project is the first known instance of a non-OS X operating system running on Apple's touchscreen devices; previous modifications have so far been limited to jailbreaking and unlocking handsets.
It also promises to expand in the future: the Dev Team is hoping to run Google's equally Linux-based but more complete Android mobile operating system on the iPhone and is searching for programmers to help with the project.
Open a terminal and switch to the root user. Suppose xx:xx:xx:xx:xx:xx is the new MAC address you want to assign to your box, for example 1a:2b:3c:4d:5e:6f (the characters allowed in a MAC address are 0-9 and a-f). Then enter the following:
# ifconfig eth0 down
# ifconfig eth0 hw ether 1a:2b:3c:4d:5e:6f
# ifconfig eth0 up
Check the new MAC address by using the following command:
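For example, assuming the same eth0 interface used above:

# ifconfig eth0 | grep HWaddr

The HWaddr field in the output should now show 1a:2b:3c:4d:5e:6f.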
DenyHosts is a Python script that analyzes the sshd server log messages to determine what hosts are attempting to hack into your system. It also determines what user accounts are being targeted. It keeps track of the frequency of attempts from each host.
Additionally, upon discovering a repeated attack host, the /etc/hosts.deny file is updated to prevent future break-in attempts from that host, and an email report can be sent to a system admin. Installation: You will need to run DenyHosts as root (in order for DenyHosts to update /etc/hosts.deny and read entries from /var/log), so you first must become root. Once you have logged in as root (or used su - root, for instance), you can run the following command:
# crontab -e
The above command will launch the crontab editor. To launch DenyHosts every 20 minutes you would then add the following line to the crontab:
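A plausible entry, with placeholder paths (the locations of denyhosts.py and its configuration file depend on how you installed DenyHosts):

*/20 * * * * python /path/to/denyhosts.py -c /path/to/denyhosts.cfg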
You will need to substitute your site-specific paths above. As an example, if you installed DenyHosts in /usr/local/etc and maintain your configuration file there as well, then the following crontab entry would be appropriate:
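Under that assumption (and further assuming the denyhosts.py script itself lives in /usr/local/bin; adjust to match your installation), the entry might look like this:

*/20 * * * * python /usr/local/bin/denyhosts.py -c /usr/local/etc/denyhosts.cfg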
Announced several months ago, the first SSD with a capacity of 256 GB has just entered production at Samsung. The manufacturer, a specialist in semiconductors, joins Toshiba, and the characteristics of this new SSD are far from unattractive. In addition to its large storage space, this 2.5-inch drive uses MLC NAND memory and has a Serial ATA 3 Gb/s interface. The manufacturer announces rates of 220 MB/s for reads and 200 MB/s for writes, for a power consumption of 1.1 watts. Compared with Samsung's own 64 GB and 128 GB SSDs, this new drive is twice as fast. It weighs just over 81 grams; no price has yet been announced by Samsung.
Kingston Technology Company, Inc., the independent world leader in memory products, today announced it is shipping its high-capacity 64GB DataTraveler(R) 150 (DT150) USB Flash drive. DT150 offers the largest capacity in Kingston's entire line of DataTraveler USB drives and allows users the room and flexibility to backup important hard drive contents, and transport and share complete collections of music, videos, photos and documents in one convenient device.
"The new 64GB DataTraveler 150 takes transportable storage to the next level with big capacity in a small package," said Jaja Lin, Flash business development manager, Kingston(R). "As file sizes increase with digital media content such as music and photos, the need for USB Flash drives with high capacities will continue to rise. The DT150 certainly addresses those needs today." Kingston's DataTraveler 150 is fully compatible with Windows Vista, XP and Windows 2000 as well as Mac OS X 10.3 (and above) and Linux 2.6 (and above). The DT150 carries a fully guaranteed five-year warranty and 24/7 tech support.
Kingston DataTraveler 150 Part Number -- MSRP (U.S. only; prices subject to change):
DT150/64GB -- $177.00
DT150/32GB -- $116.00
DataTraveler 150 Product Features and Specifications:
-- Capacities*: 64GB, 32GB
-- Dimensions: 3.06" x 0.9" x 0.47" (77.9mm x 22mm x 12.05mm)
-- Operating Temperature: 32 degrees F to 140 degrees F (0 degrees C to 60 degrees C)
-- Storage Temperature: -4 degrees F to 185 degrees F (-20 degrees C to 85 degrees C)
-- Simple: Just plug into a USB port
-- Convenient: Pocket-sized for easy transportability
-- Guaranteed: Five-year warranty
-- Compatible Operating Systems: Windows Vista (Windows ReadyBoost(TM) not supported), Windows XP (SP1, SP2), Windows 2000 (SP4), Mac OS X v.10.3.x and higher, Linux v.2.6.x and higher
Microsoft's TrueType fonts can also be used on Linux. Not only can they improve legibility, they are essential for proper look and layout if you are using Internet Explorer on Linux for testing and the like. The installation is very simple… well, almost:
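On Debian-based distributions, for instance, the usual route is the msttcorefonts package followed by a refresh of the font cache (package names differ on other distributions, and this is only a sketch of the common approach):

# apt-get install msttcorefonts
# fc-cache -f -v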
Microsoft Corp. this week will reveal new technology to deliver rich media applications on the Web, part of a broader strategy to go head to head with Web and design tools powerhouse Adobe Systems Inc.
As described by Forest Key, a director of product management for Microsoft's Server and Tools Division, Silverlight is a browser plug-in that allows Web content providers to offer rich video and interactive media experience from directly within Web sites. The technology, which leverages Vista's new graphics framework Windows Presentation Foundation (WPF), will debut at the National Association of Broadcasters (NAB) conference, being held this week in Las Vegas.
Microsoft also will unveil Web content providers who have signed up to use the technology once it is available, including Akamai Technologies, Brightcove Inc., Eyeblaster Inc., Major League Baseball and Netflix Inc.
Key said Microsoft is targeting three core audiences with Silverlight, formerly code-named WPF/E: content providers that want to distribute video and rich media over the Web; designers and developers that are building rich interactive applications; and end users that want the best possible experience when viewing Web-based media.
Silverlight is compatible with a range of browsers, including Internet Explorer (IE), Safari and Firefox. As demonstrated by Key, the technology delivers a similar user experience on both IE 7 running on Windows Vista and Firefox running on an Apple Macintosh computer. In fact, a big benefit of the technology for end users is that they will not have to download different video player technology to view online media based on what OS they are running, Key said.
Microsoft is highlighting the video-delivery capabilities of Silverlight at NAB, but the company plans to show how companies can use Silverlight in a similar way to Adobe's Flash to deliver Web-based applications that use animation and other rich media, Key said.
Microsoft also plans to optimize other components of its software platform to add value to Silverlight. For example, the forthcoming Windows Server, code-named Longhorn, will include as a plug-in the IIS7 Media Pack, which adds new features to enhance and reduce the cost of delivering rich media over the Web.
Microsoft's Expression toolset to build rich Internet applications -- which Microsoft is pitting as an alternative to Adobe's recently released Creative Suite 3 -- also is key to Silverlight because designers will use it to create applications to be delivered through the technology. Expression should be generally available in June.
Keith Cutcliffe, IT developer and analyst for ProAssurance Corp. in Birmingham, Alabama, is skeptical that Microsoft will ever gain the faithful user base Adobe has. However, he said that enterprise customers that have developed Flash applications to run on Microsoft-based Web infrastructure eventually may use Silverlight and Expression instead because of the underlying back-end platform ties.
Scott Stanfield, CEO of application development firm Vertigo Software, seems supportive of that sentiment. He said Silverlight fills a major gap in Microsoft's strategy to provide a mechanism to deliver and build applications that provide the stability of desktop applications with the user experience of media-rich Web applications.
"Previously Flash was the only answer," he said. "Now Silverlight becomes a viable alternative."
Microsoft will deliver a beta of Silverlight at its MIX 2007 conference at the end of April, and will announce plans for general availability at that time, Key said.
Yesterday was a historic day for Linux users, as Adobe finally decided to listen to them and released a 64-bit version of its Flash Player. Until now, 64-bit Linux users had to install the 32-bit version of the Flash Player, which was forced to work with the help of the nspluginwrapper package and the 32-bit libraries.
However, this could cause some issues, and I'm pretty sure that users of the 64-bit Ubuntu 8.10 (Intrepid Ibex) operating system know about the "grey box" problem of the Flash Player plugin, and are aware of the fact that you had to reload the page to see a flash movie, or even restart your browser... which was very annoying in some cases.
The first alpha version of the 64-bit Flash Player plugin from Adobe was released today, just a few minutes ago, and it is available for download here. At the moment it is available in English only, and Adobe stated that it will accept feedback on the plugin only in English. Moreover, the 64-bit version contains all the features of the 32-bit edition!
How to install?
· Download the 64-bit Flash Player for Linux from here and save it on your desktop; · Close your browser; · Extract the archive and you will see a libflashplayer.so file; · Open your Home folder and go to "View -> Show Hidden Files," or hit the CTRL+H key combination to view the hidden files and folders; · Look for the .mozilla folder, open it, and create a folder inside called "plugins" (without quotes); · Drag and drop the libflashplayer.so file from the desktop to the .mozilla/plugins folder; · Open your browser and verify the installation on YouTube or any other website with Flash content.
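The same steps can also be done from a terminal. A rough equivalent, assuming the archive was saved to the desktop (the archive name depends on the exact release you downloaded):

$ cd ~/Desktop
$ tar xzf libflashplayer-*.tar.gz
$ mkdir -p ~/.mozilla/plugins
$ mv libflashplayer.so ~/.mozilla/plugins/

Then restart the browser and test a page with Flash content.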
Amazon will be selling the OLPC (One Laptop Per Child) XO laptop in 30 countries starting on Monday, according to the latest news reports. This is the first time the laptop has been made available to the general public outside the U.S. and Canada. As previously reported, the laptop will also be relaunched on Amazon for the U.S. and Canada markets, where the website plans to sell it under the "Give one, get one" program. In addition, customers will also be able to purchase a single machine to donate.
Around 150,000 XO laptops were produced last year under the "Give one, get one" program. OLPC, or One Laptop Per Child Association Inc., is a nonprofit organization that works on offering the low-cost XO to children in developing countries. The price of the XO will be $399 (£254 or €312) for Europe and a few other countries. Those willing to buy a single machine to donate will pay $199. According to the organization, there will be no VAT (value-added tax) charged, and transactions will be processed and billed by Amazon.
The XO laptop will be available for Austria, Belgium, Bulgaria, Cyprus, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Romania, Russia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey and the U.K.
According to OLPC's wiki page, the program does not have a scheduled end date. Last year, the program ran from November until the end of December. The XO laptops have been designed to withstand harsh environmental conditions and come with a battery that lasts up to 21 hours. Moreover, the organization says that the machine can be powered by solar panels, foot pedals, and pull-strings as well.
The laptops will feature the latest edition of the Sugar interface on a Linux operating system based on Fedora. OLPC says that the laptop does not allow dual-booting Windows and Linux. Users will be able to choose between a U.K.-style power adapter and one designed for continental Europe. Although the organization is striving to bring the price of the laptop down to $100, production costs put it at about $200.
eAR OS is a state-of-the-art Linux operating system. It can run directly from a Live-CD and optionally be installed to a hard disk. That means you can try it out before you install it - for FREE!
eAR OS comes with the very advanced and beautifully simple to operate eAR Media Center, which runs out of the box. Tune in digital TV programs, rip CDs to the hard disk in lossless FLAC quality, watch digital TV and DVDs, listen to Internet radio, view photos, or listen to music while surfing the Internet, and enjoy.
eAR OS is an installable live CD. The boot options are similar to those of Ubuntu 8.04 and the installer is exactly the same.
I had high expectations for the Media Center, since it is the focus and purpose of this distribution. It opens full-screen with a list of multimedia tasks available, including Listen to Favorites, TV, DVD, Video, Music, Radio, Photo, and More. The menu list is navigable with the arrow keys or by using the mouse. If you'd like to access the regular desktop without shutting down the Media Center, you can use the Ctrl-Alt-Arrow key combination to change to another virtual desktop.
But you may not have to shut down the Media Center or arrow to another desktop, because under the More heading is a menu of commonly used applications that can be launched right from the Media Center. Key portions of the desktop will show as well, such as the panel and the Simdock docking bar. This is handy in expanding the functionality of the interface, particularly on a computer that has been dedicated to being a home theater machine, where one might prefer to leave the Media Center visible at all times. Some of the applications available include Firefox, OpenOffice.org, Pidgin, GIMP, Skype, and the GNOME Control Center.
The Listen to Favorites choice displays your personal playlist. It ships with a few examples, but you can add online radio stations, digital broadcast channels, on-disk music or videos, or music CD tracks to your Favorites. Depress the "p" key to access the list of Playlist Commands.
TV is for watching Digital Video Broadcasts using Kaffeine. That application must be able to detect and configure your TV card. I would like to see support for BTTV broadcast tuner cards included. I could use xawtv since Kaffeine doesn't support analog cards, but it's inconvenient and disappointing that this functionality isn't integrated into the eAR Media Center. As a compromise I was able to make a launcher in the eAR-More directory for xawtv so that it would appear in the More screen menu, but my family found it confusing to watch television through the More menu instead of the TV. Another alternative is to connect to a traditional television set (through S-Video or VGA-out for example), but that would require flipping the video input on the TV to watch, which is far from an integrated solution.
DVD is, obviously, for watching DVDs, and this choice too uses Kaffeine. Encrypted DVDs are no problem. The "d" key brings up the DVD menu, which gives access to other features available on a DVD. "s" controls subtitles, "f" raises the volume, "l" lowers the volume, and the space bar pauses and resumes a movie. The arrow keys fast-forward or go back in the video.
The Video menu lets you watch videos on your hard drive. Clicking it brings up a screen with several listed folders. Files should be stored in the subdirectories of /home/earmusic/eAR-Video, as there doesn't appear to be any way to add additional directories. All of the formats I tested (AVI, MPEG, and MPEG-4) played without issue.
Like Video, Music opens a screen of folders where your music should be stored. Also like Video, the files are stored in the /home/earmusic directory, and I didn't find a way to utilize directories of music stored elsewhere. You can listen to audio CDs too. While at the main screen with Music highlighted, if you use the right arrow key or move your mouse rightward, the entry will change to Listen to CD. If you choose that, Soundjuicer opens to play or extract the tracks.
Radio is for listening to Internet radio stations. eAR OS comes with an amazingly comprehensive list of stations to use, which is nice as I didn't see an easy way to add others.
Photo is for viewing a directory of images. As with Video and Music, images are stored in a hard-coded directory, specifically /home/earmusic/eAR-Photos.
The eAR Media Center is extremely easy to use. Although it didn't quite meet all my expectations, it provides extensive multimedia functionality. It looks great and was stable during my rigorous testing. I can forgive it for not supporting older analog TV cards, as they're at the threshold of antiquity and my workaround has sufficed. It comes with so many radio stations configured that everyone's tastes are likely to be addressed. The only significant complaint is the hard-coded storage directories. I'd really like to be able to add my own storage directories to the choices.
On November 12th, Nvidia and ATI/AMD, the two major companies specialized in programmable graphics processor technologies, released new versions of their video drivers for Linux-based operating systems.
While the 177.82 Nvidia graphics driver introduces support for a few Quadro FX GPUs and fixes three important bugs, the ATI/AMD company released version 8.11 of its video driver with support for new operating systems and enhancements to the ATI CrossFireX mode. Highlights of Nvidia 177.82:
· The image corruption issue for the Mozilla Firefox 3 web browser was fixed; · The power management problem, which took up to 30 seconds to resume from S3 on some recent mobile GPUs, was resolved; · The hotkey switching issue for some new mobile GPUs was repaired.
Moreover, Nvidia 177.82 adds support for the following new GPUs:
· Quadro NVS 450 · Quadro FX 370 LP · Quadro FX 470 · Quadro FX 4800 · Quadro FX 5800 · Quadro CX
Highlights of ATI Catalyst 8.11:
· Added support for the newly released Red Hat Enterprise Linux 4.7 operating system; · Added support for display scaling. This feature will allow ATI users to resize the display on devices that support 480i/p, 720p, 1080i and 1080p TV timings; · A new option was added in the ATI Catalyst Control Center that allows users to verify whether an OpenGL application is running in ATI CrossFireX mode; · The issue where the installation could not complete because the Powersaved function was enabled has been fixed.
How to install the Nvidia or ATI drivers?
Log out of your current session and hit the CTRL+ALT+F1 key combination, in order to enter a text mode session. Log in as root (System Administrator), go to the folder where you've downloaded the Nvidia driver installer (see below for links), and type:
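For example, for the 64-bit 177.82 release the command would look something like this (substitute the exact name of the .run installer you downloaded for your architecture):

# sh NVIDIA-Linux-x86_64-177.82-pkg2.run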
Then, follow the on-screen instructions to install the video driver. The Linux headers and a GCC compiler will be required to complete the installation!
On the other hand, the ATI driver can be installed with its easy to use graphical installer.
Download the ATI/AMD Linux Display 8.11 driver now from here
If you think about where Linux is fighting for market and mind-share, chances are you're thinking about Linux slugging it out with Microsoft Windows or Sun Solaris on the server, or trying to tear desktop customers away from Windows, and to a far lesser extent, from Mac OS X. That's all true, but there's also fierce competition between Linux distributions.
What is going to matter to everyone who buys and deploys operating systems is that Novell is heating up its competition with the number one Linux distributor: Red Hat. On November 11th, Novell announced a new subscription and support program "designed to aid customers making the transition from their existing third-party Linux distribution to SLES (SUSE Linux Enterprise Server)." What makes this interesting is that the three-year SLES subscription under this plan also includes two years of technical support for a customer's existing Linux deployments while they make the SLES transition.
That's new. I can't recall ever seeing a vendor offering to support the competition's offering while helping you to transition to their product. It does make sense, though. This is Linux, after all. There are a lot of differences between how Novell handles management with its ZENworks and how Red Hat does the same jobs with its Red Hat Network, but underneath the top-level management tools a good Linux administrator won't have any trouble running either SLES or RHEL (Red Hat Enterprise Linux).
Novell claims that this "new program is in response to growing customer demand for help as they make the strategic decision to transition their data center Linux infrastructure from existing third-party distributions, such as Red Hat Enterprise Linux and CentOS, to SUSE Linux Enterprise Server." In a statement, Novell's Justin Steinman, vice president of Solution and Product Marketing, said, "As the Linux market matures, we are increasingly being approached by customers who want to move to SUSE Linux Enterprise, attracted by Novell's award-winning support, superb interoperability in mixed-source environments, and by our support for mission critical applications. This new program makes it even easier for these customers to make the move to Novell." This program is already available today from Novell sales.
I honestly haven't seen that many RHEL or CentOS customers wanting to switch to anything else myself, but I have met some, just as I've met SLES users who were moving to RHEL, or its low-cost clone CentOS. Novell goes on to claim that in "a customer study conducted by independent research firm Lighthouse Research, Novell gained top ratings for overall support, and significantly outpaced Red Hat and Oracle Linux on both timeliness of phone support and support of mixed platforms, open source and proprietary software."
I've used both Novell and Red Hat support. Frankly, they're both good, and I can't call one better than the other. I will say, however, that I've found both Red Hat and Novell to do a better job than Oracle's Linux support team. There is one area, though, where Novell does do a better job than Red Hat. Novell refers to it cautiously as "mixed platforms," "proprietary software," and "superb interoperability in mixed-source environments." What they're really talking about is that, thanks to Novell's Microsoft partnership, Novell SLES does a better job of working in tandem with Windows Server 2003 and 2008 and related server/network services like AD (Active Directory).
Joint Windows/Linux support is something that a lot of businesses need. That said, Novell working hand-in-glove with Microsoft doesn't go over at all well with many Linux users. After all, Boycott Novell, which serves as the lightning rod for resentment against Novell and Microsoft working together, is a very popular site.
Be that as it may, Novell, which has profited from its Microsoft relationship, is planning on making even more from it by going after other Linux business contracts rather than Linux's traditional growth market of Unix and Windows Server shops. I'm going to be very interested in seeing how it plays out and what Red Hat will do in response to Novell's aggressive moves.
Google today launched Gmail voice and video chat, making it simple for people around the world to chat in high-quality video for free right within Gmail. All you need is a webcam and a small web browser plugin, and you can start video chatting with your friends, family, and coworkers on Gmail and Google Apps. Gmail voice and video chat lets you start a video chat without switching to another application or signing up for another account. And if you don't have a webcam, you can simply chat by voice. We've made it easy enough that your mom -- or your employees -- will actually use it.
We've tried to make this an easy-to-use, seamless experience, with high-quality audio and video -- all for free. All you have to do is download and install the voice and video plugin and we take care of the rest. And in the spirit of open communications, we designed this feature using Internet standards such as XMPP, RTP, and H.264, which means that third-party applications and networks can choose to interoperate with Gmail voice and video chat.
The launch comes as video communication grows in popularity; many of the latest lines of laptops, for example, come with built-in webcams. Businesses stretched across continents and timezones want more face-to-face collaboration among their employees, but in this economic climate, they're looking for ways to cut travel and IT expenses. Having a meeting with a colleague over video allows communications to continue in person without the expense of traveling there. Whether it's a coworker demoing a new product, or a first-time grandmother saying hello to her new grandson, sometimes there's no substitute for speaking to and seeing someone. Google is offering browser-based voice and video chat as a natural extension to webmail and instant messaging, allowing people to choose how they want to communicate at each moment -- by email, instant message, voice, or video.
Being able to switch from email to chat as needed, all within the same app, is really great for productivity. But people can only type so fast, and even with our new emoticons, there are still some things that just can't be expressed in a chat message.
A research team at Yale University has developed an advancement in file sharing technology. The system can make Internet Service Providers (ISPs) and Peer-to-Peer (P2P) software providers work together in order to deliver data more efficiently. P4P stands for "provider portal for Peer-to-Peer (P2P) applications". Developed by Professors Avi Silberschatz and Y. Richard Yang, faculty members at Yale's Department of Computer Science, together with Ph.D. candidate Haiyong Xie, the system would allow more precise and uninterrupted communication between ISPs and P2P applications.
Over the past 10 years, increased usage of the Internet has made data delivery less efficient while placing a growing burden on the existing bandwidth for transmitting data. In 1998, the percentage of Internet traffic devoted to the download and upload of large blocks of data using P2P software was only 10 percent. That figure has since increased dramatically to 70 percent. In comparison, web browsing has fallen from 60 percent of traffic in 1998 to 20 percent today, and e-mail from 10 percent to 5 percent.
A picture showing the potential entities in the P4P framework: iTrackers owned by individual network providers, appTrackers in P2P systems, and P2P clients. Not all entities might interact in a given setting. For example, trackerless systems do not have appTrackers. P4P does not dictate the exact information flow, but rather provides only a common messaging framework, with control messages encoded in XML for extensibility (Credit: http://www.openp4p.net)
P4P can lower the cost to ISPs and enhance the operation of P2P applications. According to Silberschatz: “The existing schemes are often both inefficient and costly — like dialing long-distance to call your neighbor, and both of you paying for the call”. To facilitate data transfer, the current P2P information exchange system is “network-oblivious” and uses complex protocols for bandwidth access.
Yang states that, “Right now the ISPs and P2P companies are dancing with the problem — but stepping on each other’s toes. Our objective is to have an open architecture that any ISP and any P2P can participate in”. The project is funded by Yale without immediate financial interest, but joins forces with a working group called P4P, which was formed in July 2007.
The working group is hosted by DCIA (Distributed Computing Industry Association), and is co-chaired by Doug Pasko from Verizon and Laird Popkin from Pando. More than 50 other major organizations are also currently contributing to the venture. Silberschatz reiterates that the new technology has the potential to significantly improve the internet. He comments, "The P4P architecture extends the Internet architecture by providing servers, called iTrackers, to each ISP. The servers provide portals to the operation of ISP networks".
In a field test conducted by Silberschatz and his team using the Pando software in March 2008, the P4P system decreased inter-ISP traffic by an average of 34 percent, increased delivery speeds to end users by up to 235 percent across US networks, and up to 898 percent across international networks.
Silberschatz says that while ISPs like AT&T, Comcast, Telefonica and Verizon and P2P software companies like Pando each maintain their independence, "the value of the P4P architecture is significant, as demonstrated in recent field tests." Even with the positive outcomes, the companies remain wary of losing that independence.
With P4P possibly managing downloads within a few years, the only people who should be apprehensive about the development are the recording and movie industries' legal teams. Although users might fear the privacy implications of increased ISP involvement with P4P traffic, security breaches are highly unlikely, as ISPs already carry all P2P traffic and can monitor it at any time as things stand. Instead, the new system will merely help save money and time for the users.
Advanced Micro Devices (AMD) is developing processors with 12 cores, targeted for release in the first half of 2010. This new plan deviates from the original product vision of 8-core chips. The 12-core processor is code-named Magny-Cours. The chip will include 12MB of L3 cache and support DDR3 RAM.
During the second half of 2009 AMD is set to release a 6-core chip code-named Istanbul, and then jump immediately to a 12-core chip the following year, an AMD spokesman said. "Twelve-core chips will handle larger workloads better than 8-core chips and are easier to manufacture," said Randy Allen, vice president and general manager at AMD.
AMD is also planning to release a 6-core chip in 2010 to complement the 12-core chip and meet the requirements of systems that do not need 12 cores. Code-named Sao Paulo, the chip will include 6MB of L3 cache and support for DDR3 RAM. The new chips will be manufactured using a 45-nanometer process (already used in Intel's current generation of processors), which should increase power efficiency.
Dean McCarron, an analyst with Mercury Research, explains that AMD, which is struggling financially, is weighing financial and technical considerations in jumping from 6-core to 12-core chips. He added that moving to twice the chip size will allow the company to put more cores in each package while delivering better product margins and lowering manufacturing costs. AMD's 12-core chip will contain two 6-core processors on individual dies in a single processor package, McCarron said. That is a more reasonable goal than putting 12 cores on a single chip, which can be expensive to manufacture.
The move also enables AMD to avoid competing directly with Intel in 8-core chips, McCarron said. In the second half of this year Intel is shipping a 6-core Xeon server processor code-named Dunnington, and only later plans to shift to 8-core processors. Even with AMD's modified plans, Intel will remain stiff competition. Intel shipped 78.5 percent of chips in the first quarter of 2008, while AMD held a 20.6 percent market share, a slight gain from the 18.7 percent share it held in the first quarter of 2007.
The new product direction is a strategy for AMD to recover from recent chip and supply issues. AMD's latest server chips, the quad-core Opteron processors code-named Barcelona, began shipping in late June after numerous delays and obstructions. "Obviously, AMD had some trouble over the past year, but they have a stable of OEMs and routes to market with their processors. What you're seeing is much more public focus on what's going to happen in the next 18 to 24 months rather than longer term," said Gordon Haff, principal IT advisor at Illuminata. The company last month reported its sixth consecutive quarterly loss and plans to cut 1,650 jobs by the third quarter.
Additional information on AMD’s 12-core processor plans can be obtained at the AMD website.
Intel recently unveiled a line of new-generation system-on-a-chip designs. The Intel EP80579 Integrated Processor family can be applied to security, mobile Internet devices, storage, communications, and industrial robotics applications. The system-on-a-chip (SoC) products are based on the Pentium M processor. To usher in this new class of highly integrated, purpose-built, and Web-savvy System on Chip (SoC) designs and products, Intel executives plan to apply the company's chip design expertise, factory capacity, advanced manufacturing techniques, and the economics of Moore's Law to the chip manufacturing stages.
Intel's new intelligent SoC design is based on the same Intel architecture (IA) used by the company's current processors, which run the greater part of the Internet. Compared to conventional SoCs, these products will reach new heights of performance and energy efficiency: in some cases the chips' board footprint is 45% smaller and power dissipation is reduced by 35%.
Intel provides seven years of life-cycle manufacturing support for each product. According to the company, the SoCs are best suited for small to medium-sized businesses (SMBs) that employ or manufacture security applications; conventional, embedded, and industrial computer systems; and home networks drawing on attached storage, IP telephony, and wireless infrastructures such as WiMAX.
The chips cost between US$40 and US$95, the price depending mostly on clock speed and the type of technology incorporated in the chip. For example, one chip provides Intel's acceleration technology for cryptographic and packet processing, used for enterprise-level and voice over IP (VoIP) applications for security appliances, such as virtual private network (VPN) gateways and firewalls. Users can develop their own security appliances or voice applications, for example, using software drivers and software modules which can be downloaded from Intel.
Intel has more than 15 SoC projects designed and in waiting, with many of the products built around the new Intel Atom Processor Core. Intel’s first Consumer Electronics (CE) chip, named "Canmore", is planned to be launched later this year, and the second-generation "Sodaville" is due next year. "We're now able to deliver more highly integrated products ranging from industrial robotics and in-car infotainment systems to set-top boxes, MIDs and other devices,” said Gadi Singer, Vice President of Intel's Mobility Group and General Manager, SoC Enabling Group. "Best of all, customers and consumers will equally benefit."
As a variety of gadgets and devices, from handheld computers to home health-monitoring devices, grows in popularity, Intel sees a vibrant market that can generate billions by providing for the next generation of Internet-connected devices.
Additional information on Intel’s new line of system-on-a-chip (SoC) products can be found on the company’s website.
Engineers at the Massachusetts Institute of Technology (MIT) have developed a method to create and install tiny microbatteries about half the size of a human cell. Using viruses to assemble the battery's electrodes, this new type of battery could one day power miniature devices by being stamped directly onto their surfaces. The team reports that it has successfully assembled and tested two critical components of the battery and is currently working on a complete battery.
This is actually the first time microcontact printing has been used to manufacture and arrange microbattery electrodes, and it is also the first implementation of virus-based assembly in such a procedure. The technique itself does not require demanding conditions (it can be performed at room temperature) and needs no expensive equipment.
The battery design comprises two electrodes, an anode and a cathode, separated by an electrolyte. So far, the MIT team has developed only the anode and the electrolyte. The process of creating the battery starts with a clear, rubbery material on which the team used a well-known technique called soft lithography to form a pattern of tiny posts either four or eight millionths of a meter in diameter. On top of these posts, they deposited several layers of two polymers that act together as the solid electrolyte and battery separator.
To form the anode, the engineers used viruses that preferentially self-assemble atop the polymer layers on the posts, an approach adapted from findings published in 2006, when Hammond, Belcher, Chiang, and colleagues reported on how to form the anode. The MIT team also modified the viruses' genes so they produce protein coats that accumulate molecules of cobalt oxide, forming the ultrathin wires that make up the anode.
An array of microbattery electrodes, each only about four micrometers, or millionths of a meter, in diameter (Credit: MIT)
The final step was to stamp the array of tiny posts, each covered with layers of electrolyte and the cobalt oxide anode: turning the stamp over transferred the electrolyte and anode to a platinum structure, and the current system was tested using lithium foil.
The latest findings are positive, with the team concluding that the resulting electrode arrays display full electrochemical functionality. The team next plans to investigate stamping the batteries onto curved surfaces and integrating them with biological organisms.
Additional information on this virus powered battery can be obtained at MIT’s website.
Fusion-io has recently introduced the ioSAN, which is the world’s first networked enterprise Solid-State Drive (SSD). The new product makes it possible to extend the raw power of SSD across the network and can be deployed as networked, server-attached storage or integrated into networked storage infrastructure, offering a fundamentally different model for enterprises’ storage management.
The new device was displayed at the DEMOfall conference, held earlier this month, and its main target market is medium and large enterprises that use networked computing power to process various tasks. Unlike today's common storage devices, the new ioSAN utilizes the relatively new SSD technology to bring networked computers better storage solutions. Those who might benefit most from the technological improvements the ioSAN offers are companies that use applications needing quick access to data, such as financial services applications, Web services, or media editing. However, Fusion-io claims that even the more traditional storage-related applications, such as replication, mirroring, ILM, failover and backup/restoration, could improve performance using the new product.
The ioSAN combines ioMemory and Converged Enhanced Networking and utilizes the same PCI-Express (PCIe) form factor as the company's first product, the direct-attached ioDrive enterprise SSD. The result is that the ioSAN functions as network-attached storage, making it possible for server-attached storage to communicate between systems over existing network architecture. Therefore, any user can use an off-the-shelf server to create a full-power Storage Area Network (SAN). With multiple terabytes of low-cost tiered storage, high-performance enterprise flash, and high-performance enterprise networking, building systems that can utilize the enhanced network bandwidth becomes easier.
Using a standards-based, memory-speed protocol over either 10GigE or 40Gb/s QDR InfiniBand, the ioSAN shares ioMemory capacity between servers. With latencies of less than two microseconds, it incorporates an integrated network interface that can dynamically alternate between 10Gb/s Ethernet and 40Gb/s quad data rate InfiniBand. The built-in network interface makes it easy to create networked storage across servers with increased performance and flexibility, and with zero footprint. This networked storage is extremely easy to integrate and manage within existing server infrastructure, since no additional software or hardware is needed.
“With this development, everything you thought you knew about SSD and storage networking is no longer true,” said David Flynn, CTO of Fusion-io. “The ioSAN fuses SSD with storage networking, combining the best of direct-attached and storage networking with the best of SSD and traditional storage. With this revolutionary advancement, Fusion-io has commoditized high-performance network storage in the same way that companies like NVIDIA and ATI/AMD commoditized high-performance graphics processing. Fantastic applications of this technology are now beginning to emerge.”
For more information about the new ioSAN, see Fusion-io’s press release.
If humans may be viewed as the sum total of their memories, then at our doorstep may be a life changing revolution: the ability to store one’s entire life experiences on an accessible and easily searchable file. In this article, we examine this idea, as well as some of the problems involved in its application, and present a unique project towards this end being carried out at Microsoft's research laboratories.
Who doesn't wish to keep a record of a beautiful sunset that particularly impressed us in childhood, our first kiss or, for that matter, an important conversation with the boss that took place a few months back? One of our shortcomings is a constant struggle to remember. How difficult it can be sometimes to recall the name of the person you need to meet in an hour, the important phone number your secretary just read you on the phone, or that very important item your wife told you not to forget to bring home this evening. But what if you had a magical device that would allow you to rewind reality and see exactly what happened? A few years back I began thinking about what could bring this dream closer to reality.
The features required to make such a device possible can be divided into three elements: first there is the hardware, which, as we shall soon see, actually presents the smaller set of problems. The software is saddled with a far more complicated series of hurdles, and finally there are social and perhaps legal issues that would inevitably accompany the large-scale implementation of such a technology.
In terms of hardware, we would require one or two miniature high definition wide-angle cameras, which could take high definition (HD) video, that could be attached to our glasses or sunglasses. (They might look geeky at first but in a few years it might be possible to actually integrate them into the glasses similarly to MP3 and spy eyewear glasses.) Microphones with the ability to locate the direction from which the sound originates would also be required. Other hardware add-ons could be eye-tracking equipment, which would register what you were looking at in any given moment (using equipment similar to the Stanford Poynter Project, for example), and a GPS receiver, which would inform you of where you were at any given time. Since most of our day is often spent indoors and current generation GPS receivers don't work well indoors, new solutions would have to be found for this problem (such as increased sensitivity GPS receivers).
Perhaps the biggest hardware roadblock would be data flow and data storage. Using a wireless transmitter on the glasses to transmit real-time HD-quality video and audio would require a great deal of bandwidth, but more importantly it would consume a great deal of power, giving the recording glasses a very short operation time. Therefore, until more advanced batteries and power-saving wireless broadband become available, the best solution would probably be a wired connection, which brings up the second problem: storage. A DVD-quality video stream consumes about 4 MB/s; even with real-time compression, we would still end up with around 1 MB/s if we want to preserve reasonable HD quality. If a full day (which would surely require more than one battery) were to be recorded, approximately 57 GB of storage space (1 MB/s for about 16 hours) would be required. Although this is within the limits of current-day portable hard drives, it is still very expensive (about $25 per day on storage alone at today's prices).
But hardware issues are slight in comparison to the problems on the software end. Let's say that you have recorded a full month of your life and would like to find what you said to a friend in a meeting, though you don't recall when or even where it took place. You could of course start running the 480-hour video and look, but even fast-forwarding, it probably would not be a very good use of your time. So how can you search a 1.7-terabyte video for a few specific seconds? Currently there is no simple solution to this problem, but there are a few technologies that, if integrated, might help. Voice recognition technology has been evolving for many years and has recently attained a fairly acceptable level. However, it usually requires training and relatively ideal conditions (a microphone in front of your face and a fairly noiseless environment). For our purpose, the voice recognition software would need to be able to recognize what different people are saying without training and in less than ideal conditions (outdoors, in noisy crowded places, etc.). The software would also have to perform another trick: recognize different speakers' voices without long training sessions. Facial recognition could be implemented to enhance this. By combining the information from the voice recognition and facial recognition software, it would be possible to analyze exactly who we were talking to and what was said. The eye-tracking hardware could also provide important information in this respect. GPS could add another level by stating where the conversation took place and even what we were looking at at the time.
Even if all these hardware and software hurdles could be overcome to create a device capable of real-time HD video recording and data analysis, socio-psychological and legal problems would inevitably crop up in our society, in which privacy is a sacred value. Not to mention what would happen if some unauthorized person somehow gained access to your life's database!
Actually implementing the idea described above is still at least 10-20 years away, but, as described below, partial implementation is already underway and might bring actual benefits to all of us in the near future.
In early July 2006, Freescale Semiconductor announced the first commercial availability of a new type of memory with the potential to surpass most existing types in terms of speed, power consumption, and durability. This article reviews the advantages of MRAM and its future potential.
With the continual release in recent years of new types of computer memory (RAM, ROM, DRAM, Flash, SRAM, PRAM…), the memory chip industry has become an ever more bewildering world. Freescale's MRAM, one of the latest to be commercially unveiled, improves on and combines the advantages of two types of conventional memory.
The various types of computer memory can be classified in several different ways, the simplest of which is the division into volatile and non-volatile memory. Volatile memory requires constant power to maintain stored information. Most types of RAM (random access memory), the most common type of memory used by modern computers, are volatile. Thus, to store information, conventional RAM computer chips are dependent upon electricity flowing through them. When the power is switched off (i.e., when the system is "powered down"), unless the information has been copied to the hard drive, the information is lost.
Non-volatile memory, on the other hand, can retain stored information permanently, obviating the need for a constant power supply. ROM (read-only memory), which stores information that does not require frequent changing (i.e., doesn't need rewriting), such as firmware (software embedded in a hardware device, such as a BIOS [basic input-output system]), is typically non-volatile. So, even when the system is off, the data is retained.
Modern types of ROM such as Flash, used in thumb drives and many MP3 players, are also non-volatile but easily rewritable, making them more like RAM. This combination of qualities has made Flash memory highly popular in recent years, and it is currently used to improve other types of storage such as hard drives or even replace them altogether. But although Flash is cheap and non-volatile, it still suffers from a relatively limited lifetime; though this has improved considerably in the last few years, more importantly, Flash still has a much lower write speed than RAM.
In an attempt to combine the speed of the faster volatile memory with the benefits of non-volatile memory, Freescale (which originated from Motorola Semiconductor about two years ago) created a new type of non-volatile memory - Magnetoresistive Random Access Memory, or MRAM. The roots of MRAM can be traced back to the 1940s at Harvard, when physicists An Wang and Way-Dong Woo, and later Jay Forrester and colleagues at MIT, worked on developments that led to magnetic core memory and, later on, to the discovery of the "giant magnetoresistive effect" in thin-film structures by researchers from IBM in the late 1980s. Like Flash, MRAM retains data after a power supply is cut off, potentially eliminating that seemingly endless boot time of conventional computers when data from the hard drive is transferred to RAM, as well as loss of data when the computer is suddenly shut off. MRAM has much faster write speeds than Flash and has unlimited endurance, meaning that MRAM is not subject to the degradation suffered by Flash.
Conventional RAM memory is made of transistors and capacitors that are paired to create a memory cell, which represents one bit of data (0 or 1). Memory cells are aligned in columns and rows, the intersections of which are known as addresses in which information is stored. Reading and writing information occurs by measuring or changing the charge at a specific address, accordingly.
MRAM works in a different way, more like the read/write head of a hard drive. But unlike a hard drive, which includes mechanical parts (the moving arm holding the read/write head and the rotating plates on which the information is stored), MRAM is a solid state device and, as such, has much greater speed and durability. Like conventional RAM, MRAM is composed of transistors but, instead of electrical charges, it uses magnetic charges to store information. An MRAM chip is made up of millions of pairs of tiny ferromagnetic plates (like the one covering hard drives) called memory cells, i.e., magnetic sandwiches consisting of two magnetic layers separated by a very thin insulating layer. Each magnetic layer has a polarity – a north pole and a south pole. These can be oriented in a parallel orientation, meaning that both have their respective poles (or 'magnetic moments') in the same orientation, or in an anti-parallel fashion, meaning that their poles/magnetic moments are oriented in opposite directions. These relative magnetic pole orientations correspond to the binary memory states, either 0 or 1.
An MRAM chip reads information by measuring the electrical resistance of a specific cell that, in turn, depends upon the alignment of the magnetic moments of the layers of the cell. To read a bit of information, a current is passed through the memory cell. If the magnetic moments are in a parallel orientation, then the detected resistance would be smaller than if they were in an anti-parallel orientation.
Writing is achieved by aligning the magnetic moments of the two memory layers into one or the other relative orientation. Current is passed through two sets of parallel wires or write lines (called a bit line and a digit or word line), which pass over and beneath the memory cells, respectively. The bit lines and the digit lines run perpendicular to one another, and at their intersections lie the magnetic memory cells, each defined by one particular bit line and one particular digit line. To write to a particular memory cell (bit), current is passed through the two wires that intersect at that memory cell. The magnetic field generated by the current passing through the wires can change the orientation of the magnetic moments of that particular memory cell.
Although MRAM has advantages over virtually every existing memory type, it is still in its infancy. Many had hoped MRAM would usher in the age of instant-on computers by replacing both main memory and hard drives, but, mainly because of cost, those hopes remain a dream. At $25 per 0.5 MB, MRAM cannot compete with existing RAM selling for $25 per 256 MB, let alone Flash, which sells for $25 per 1 GB. For now, MRAM is likely to find use only in specialized markets, for example as a battery-backed SRAM replacement. Only when it breaks its current high price per megabyte (possibly with lithography more advanced than its current 0.18-micron process) will MRAM's unique qualities find widespread use.
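The price gap is easier to see as cost per megabyte. The quick calculation below just normalizes the figures quoted above; it is a back-of-the-envelope sketch, not current pricing:

```python
# Cost per megabyte implied by the prices quoted above.
prices = {
    "MRAM":  (25, 0.5),      # $25 per 0.5 MB
    "RAM":   (25, 256),      # $25 per 256 MB
    "Flash": (25, 1024),     # $25 per 1 GB (1024 MB)
}

for name, (dollars, megabytes) in prices.items():
    print(f"{name}: ${dollars / megabytes:.4f} per MB")

# MRAM:  $50.0000 per MB  -> roughly 500x the cost of RAM per megabyte
# RAM:   $0.0977 per MB
# Flash: $0.0244 per MB
```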
Researchers from the Netherlands and Japan have succeeded in flipping the value of a magnetic memory bit without applying any external magnetic field. Instead, they flashed a very short pulse of circularly polarized laser light at it. The method is about 50,000 times faster than those used in other magneto-optic data storage systems, which means the technology could enable ultra-fast, all-optical magnetic hard disk drives.
A binary bit in a magnetic hard disk drive is represented by the direction, "up" or "down", of the magnetic moments in a small region of the disk. Theo Rasing and colleagues at Radboud University Nijmegen in the Netherlands, together with researchers at Nihon University in Japan, demonstrated their method by flipping the magnetization of a 5-micron-wide spot on a thin magnetic film from up to down and vice versa. While some commercial drives already use light to read magnetic bits, this is the first time laser light has been used to write data as well. Using only laser light could significantly reduce the cost and complexity of future hard drives.
However, the physicists face several problems in developing the technology. The recorded spot, at 5 microns wide, is much larger than the bits of today's hard drives (less than half a micron), meaning users would gain a substantial speed increase at the price of reduced disk capacity. Julius Hohfeld of Seagate Research in Pittsburgh said another problem is the need for an affordable laser that can fire pulses lasting 40 femtoseconds (40 quadrillionths of a second). The circularly polarized light must also be intense and focused onto a spot 50 nm in diameter, smaller than the wavelength of the laser light itself. Rasing, who has patented the write process, said these are all solvable problems.
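A rough sense of the capacity penalty comes from comparing spot areas. The arithmetic below just follows from the sizes quoted above, treating each recorded region as a circle and ignoring real-world track layout:

```python
import math

# Area of a recorded spot (treated as a circle) for the quoted sizes.
laser_spot_um = 5.0        # width of the laser-written spot, in microns
hdd_bit_um = 0.5           # "less than half a micron" for today's drives

def spot_area(diameter_um: float) -> float:
    return math.pi * (diameter_um / 2) ** 2

ratio = spot_area(laser_spot_um) / spot_area(hdd_bit_um)
print(f"One laser-written spot covers ~{ratio:.0f}x the area of a conventional bit")
# -> ~100x, i.e. roughly a hundredfold drop in areal density at 5 microns
```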
So far the physicists have worked with materials that are widely used in magneto-optic data storage devices, performing their experiments on an alloy of gadolinium, iron and cobalt. They are now testing other materials with higher coercivity, which could preserve the same storage density as a conventional hard drive.
Future laser-based hard drives may have to compete with advanced solid state drives (SSDs), which also offer better performance and reliability. Still, as TFOT mentioned in its comprehensive solid state hard drive coverage, SSDs remain fairly expensive and offer relatively small capacities and poor write performance compared to existing hard drives. By the time laser-based hard drive technology matures, most of these issues will probably have been resolved.
More information about the technology can be found in Science magazine (subscription required).
The IEEE has formally approved and published the new Wi-Fi standard 802.11r, also called Fast Basic Service Set Transition. The standard, which spent four years in development, addresses the performance challenges of running VoIP over Wi-Fi on large-scale networks. It allows Wi-Fi devices to roam rapidly between access points, enhancing the operation of VoIP on enterprise LANs.
The original IEEE 802.11 standards were designed around single access points (APs), but that is not the case in offices, where multiple APs are required. Under the new standard, devices can jump from one AP to another far more swiftly than before. 802.11r minimizes the handoff delays associated with 802.1X authentication by reducing the time it takes to re-establish connectivity after a client roams between 802.11 APs.
802.11r incorporates typical QoS mechanisms, such as packet prioritization and call admission control (CAC), to improve the operation of real-time voice applications. Using three MAC-layer enhancements, the standard lowers handoff time while maintaining high levels of security.
The first of the three enhancements was the elimination of the 802.1X key exchange because it was not required during handoffs between APs within the same “mobility domain.” A mobility domain is a set of APs built to execute fast transitions between them.
The second improvement folds the four-way handshake needed for session key establishment into the existing 802.11 authentication and association messages. This removes the delay after re-association while the security negotiation completes and lets data transmission resume sooner. The final enhancement packages all call resource requests into the new authentication messages exchanged before re-association.
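As a very rough illustration of why folding the key exchange into the existing frames matters, the sketch below counts over-the-air message exchanges for a legacy roam versus a fast transition. The frame counts and per-exchange latency are assumptions chosen for illustration only, not figures from the standard, and real handoff times vary widely with the EAP method and network conditions.

```python
# Back-of-the-envelope comparison of roaming handoffs (all numbers assumed).
MS_PER_EXCHANGE = 5  # assumed round-trip time per request/response exchange

legacy_roam = {
    "802.11 authentication": 1,
    "802.11 reassociation": 1,
    "full 802.1X/EAP exchange": 6,   # assumed; varies widely by EAP method
    "four-way key handshake": 2,
}

fast_transition = {
    # 802.11r piggybacks key establishment and resource requests on the
    # authentication and reassociation frames themselves.
    "FT authentication (key material piggybacked)": 1,
    "FT reassociation (keys confirmed, resources requested)": 1,
}

for name, plan in (("Legacy roam", legacy_roam),
                   ("802.11r fast transition", fast_transition)):
    exchanges = sum(plan.values())
    print(f"{name}: {exchanges} exchanges, ~{exchanges * MS_PER_EXCHANGE} ms")
```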
Until recently, vendors have implemented lower-security alternatives such as Wired Equivalent Privacy (WEP) encryption on their Wi-Fi VoIP networks, and have placed VoIP traffic on separate Virtual LANs (VLANs) to keep the rest of the network protected. Vendors such as Meru and Extricom have built networks with no roaming at all, placing all their APs on the same channel.
The Wi-Fi Alliance released a VoIP certification known as Wi-Fi Certified Voice-Personal in June, but it has had limited success. The Alliance plans to follow up with a Voice-Enterprise certification, which will incorporate the 802.11r standard, in 2009. "[Voice-Personal certification] is for low range stuff and SME equipment," said Alistair Mutch, development director for Wi-Fi switch vendor Trapeze. "We have not submitted to the low end one as we felt it was really not worth it." IEEE 802.11r could remove a bottleneck in enterprise Wi-Fi VoIP installations and should allow VoIP certification to move ahead.
Additional information on the IEEE 802.11r standard can be obtained from the IEEE website.
SanDisk recently unveiled the flash-based Vaulter Disk, a device that enables faster launching and loading of software on laptops and personal computers. SanDisk envisions the Vaulter as a unique data storage solution: a compromise between full, expensive solid state drives (SSDs) and conventional hard drives, which many experts consider the bottleneck of modern computing. It remains to be seen whether the Vaulter will gain popularity, something that did not happen for similar devices such as the Samsung hybrid hard drive (which combines flash memory with a mechanical disk).
Every computer user knows that waiting for the OS to boot can be exhausting, whether it is Microsoft Windows, Linux, or any other operating system. The same applies to many modern applications. Mechanical hard drives have been the main bottleneck in modern computing since their introduction into the PC market in the 1980s. SanDisk hopes to achieve better performance by combining a regular hard drive (for less frequently accessed files) with a smaller, faster flash-based Vaulter Disk.
SanDisk plans to launch the Vaulter Disk at CES 2008, which will take place in Las Vegas in early January 2008. The device will initially be offered to original equipment manufacturers (OEMs) in capacities ranging from 8GB to 16GB (no price has been released).
The Vaulter Disk is a flash-based PCI Express module that can be added to a laptop or desktop PC. Flash technology gives the Vaulter a number of advantages over conventional hard drives: the lack of mechanical parts makes it much more resistant to shock while consuming far less power and releasing significantly less heat. The device is also extremely quiet and responds much more quickly than a mechanical drive. However, flash technology has its drawbacks, first and foremost its high price. SanDisk has therefore chosen to use flash in a secondary storage device that hosts the computer's operating system and, optionally, selected applications the user runs frequently, enabling faster access times. The two drives operate in parallel, increasing the overall speed and performance of the PC.
SanDisk's Vaulter Disk accelerates performance by controlling in advance how data is distributed between itself and a high-capacity hard drive, increasing overall responsiveness. "SanDisk Vaulter Disk consistently boosts user responsiveness by taking advantage of the best native characteristics of a flash-based module and a hard drive," said Tavi Salomon, Vaulter Product Manager at SanDisk.
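The idea of steering hot data to flash and leaving everything else on the hard drive can be sketched in a few lines. The toy policy below illustrates the concept only; it is not SanDisk's actual placement algorithm, and the file names, sizes, and access counts are invented:

```python
# Toy data-placement policy: frequently accessed files live on the small,
# fast flash tier; everything else stays on the large mechanical drive.
FLASH_CAPACITY_GB = 16

def place_files(files: list[dict]) -> dict[str, list[str]]:
    """files: [{'name': ..., 'size_gb': ..., 'accesses_per_day': ...}, ...]"""
    placement = {"flash": [], "hdd": []}
    used_gb = 0.0
    # Hottest files first, pinned to flash until it is full.
    for f in sorted(files, key=lambda f: f["accesses_per_day"], reverse=True):
        if used_gb + f["size_gb"] <= FLASH_CAPACITY_GB:
            placement["flash"].append(f["name"])
            used_gb += f["size_gb"]
        else:
            placement["hdd"].append(f["name"])
    return placement

example = [
    {"name": "operating system", "size_gb": 8, "accesses_per_day": 500},
    {"name": "office suite",     "size_gb": 2, "accesses_per_day": 120},
    {"name": "photo archive",    "size_gb": 60, "accesses_per_day": 2},
]
print(place_files(example))
# {'flash': ['operating system', 'office suite'], 'hdd': ['photo archive']}
```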
During the recent Intel Developer Forum (IDF) in San Francisco, Intel announced that, together with its partners, it is forming a "promoter group" to push for a new generation of the most popular connection standard ever created: the Universal Serial Bus (USB). The next-generation USB will be ten times faster than the current generation and will be capable of transferring large files (25GB and more) quickly and simply.
The new USB 3.0 Promoter Group includes many of the computer industry's leaders, including Microsoft, HP, NEC, Texas Instruments and others. The new technology will be a backward-compatible standard with the same plug-and-play capabilities as previous USB technologies and, according to Intel, will keep the ease of use that has made USB so popular until now. Other expectations for the new standard include optimization for low power consumption and improved protocol efficiency. USB 3.0 will be designed to take advantage of future optical capabilities, enabling transfer rates of up to 4.8 Gbps.
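To put those figures in perspective, here is the simple arithmetic for moving the 25GB file mentioned above at USB 2.0's nominal 480 Mbps versus the 4.8 Gbps targeted for USB 3.0. These are raw signaling rates, ignoring protocol overhead, so real transfers would be slower:

```python
# Time to move a 25 GB file at nominal signaling rates (no protocol overhead).
FILE_GB = 25
FILE_BITS = FILE_GB * 8 * 1e9          # treating 1 GB as 10^9 bytes

for name, gbps in (("USB 2.0 (480 Mbps)", 0.48), ("USB 3.0 (4.8 Gbps)", 4.8)):
    seconds = FILE_BITS / (gbps * 1e9)
    print(f"{name}: ~{seconds / 60:.1f} minutes ({seconds:.0f} s)")

# USB 2.0 (480 Mbps): ~6.9 minutes (417 s)
# USB 3.0 (4.8 Gbps): ~0.7 minutes (42 s)
```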
USB is also starting to evolve into a wireless protocol (called WUSB), and companies such as Artimi are already using the new technology to create wireless digital cameras that let users download pictures directly to a computer without cables. The short-range wireless market is currently dominated by the Bluetooth standard, while several complementary standards such as ZigBee and Wibree have also started to make their mark. Because WUSB allows much higher transfer rates, it has applications in video transfer and other demanding wireless tasks. It is reasonable to assume that USB 3.0 will eventually include some sort of wireless version, although enabling wireless transfer at 4.8Gbps will not be simple, especially if low power consumption is a priority (though not impossible: researchers at the Georgia Institute of Technology recently transferred 5 Gbps wirelessly over a distance of five meters).
According to Intel, the new USB 3.0 specification is expected to arrive in the first half of 2008, though full-scale adoption of the new technology will probably take more time. Intel's original press release on USB 3.0 can be found here.
For years Ortal Alpert tried to stay ahead of the game, buying the latest hard drives and optical drives to store his ever-growing library of data. In the mid-1990s, Alpert came up with a novel idea for storing data and decided to start his own company. Almost ten years later, Alpert's dream led to the creation of a new optical technology, one with the potential to hold 20 times more data than the best existing optical technology.
In DVD or HD optical media, there are either one or two layers of data. Adding more layers using existing technology would be expensive, but, more importantly, it would have to get around a very basic problem: it is difficult to read information embedded deep inside this kind of media. The semireflective layers currently used to store data on CD/DVD/HD-DVD/BD reduce the amount of light that reaches the deeper layers, shrinking the signal reflected from each one; after a few layers, the reflected light becomes so weak and noisy that reading the data is nearly impossible.
Overcoming this basic limitation of existing optical media is the goal Mempile set for itself, and the way to achieve it is by completely changing the way optical media works, starting with the material of which it is made. Mempile developed a special variant of the polymer polymethyl methacrylate (PMMA) known as ePMMA. After several years of trial and error, Mempile was able to produce this unique polymer, which it claims is almost entirely transparent to the specific wavelength of the laser used by its recorder/player. The yellowish color of the media is thus not a publicity stunt but a result of the special properties of the material Mempile uses.
Using ePMMA, Mempile was able to create a medium with about 200 virtual layers (i.e., created by the laser), five microns apart, each holding approximately 5 GB of data. Although current prototypes are still in the 600–800GB range, Mempile is convinced that further optimization will let it reach its goal of 1 TB on a 1.2mm disc in the very near future.
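The quoted numbers add up straightforwardly; here is a quick sanity check of the layer count, capacity, and thickness figures (illustrative arithmetic only):

```python
# Sanity check of the quoted TeraDisc figures.
layers = 200
gb_per_layer = 5          # approximate data per virtual layer
layer_spacing_um = 5      # microns between layers
disc_thickness_mm = 1.2

total_capacity_gb = layers * gb_per_layer
stack_thickness_mm = layers * layer_spacing_um / 1000

print(f"Capacity: {total_capacity_gb} GB (~1 TB)")           # 1000 GB
print(f"Layer stack: {stack_thickness_mm} mm of the "
      f"{disc_thickness_mm} mm disc")                        # 1.0 mm of 1.2 mm
```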
But the specially designed polymer is just half the story. To make a medium that could actually store all this data and retrieve it effectively, the old method of reading from and writing to optical media had to be abandoned. Instead of pits and flat surfaces representing zeros and ones, Mempile chose a photochemical process that occurs when an ePMMA molecule is precisely illuminated by a red laser of a specific wavelength.
[Image: Mempile disc in the lab]
To be able to precisely illuminate a specific molecule inside the disc, Mempile uses what is known as nonlinear optics. In linear optics, the amount of light absorbed by an object is directly proportional to the amount of light applied. In nonlinear optics, absorption does not rise in direct proportion to the light applied; instead, a small decrease in the light applied produces a dramatic decrease in the light absorbed. The process Mempile uses to write and read data, called two-photon absorption, is nonlinear in nature. When the laser beam is focused to a small radius on the disc, it is easy for pairs of photons to excite the ePMMA molecules (chromophores), but when the radius of the beam increases even slightly, it becomes very improbable for two photons to be absorbed by a chromophore, so no writing or reading can occur. Nonlinear optics is required here because, in a 200-layer disc, linear optics would cause some of the light to be absorbed by the layers above the intended one, resulting in errors and loss of signal.
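The strength of the two-photon scheme comes from how steeply absorption falls off away from the focus: in a simple textbook approximation, two-photon absorption scales with the square of the light intensity, and for a fixed laser power the intensity scales inversely with the beam's cross-sectional area. The sketch below illustrates that scaling; it is a generic approximation, not a model of Mempile's actual optics:

```python
# Two-photon absorption scales roughly as intensity squared (I^2), and for a
# fixed laser power the intensity scales as 1/area ~ 1/r^2 for beam radius r.
# So absorption ~ 1/r^4: defocusing the beam even slightly strongly suppresses
# writing and reading in the layers above and below the target layer.

def relative_two_photon_absorption(radius_ratio: float) -> float:
    """Absorption relative to the tightly focused spot (radius_ratio = 1)."""
    intensity = 1.0 / radius_ratio**2
    return intensity**2

for ratio in (1.0, 1.5, 2.0, 4.0):
    print(f"beam {ratio:.1f}x wider -> absorption x"
          f"{relative_two_photon_absorption(ratio):.3f}")

# beam 1.0x wider -> absorption x1.000
# beam 1.5x wider -> absorption x0.198
# beam 2.0x wider -> absorption x0.062
# beam 4.0x wider -> absorption x0.004
```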
To read data, Mempile uses a laser at a specific power that excites the chromophores in a particular layer of the disc. To record data, a stronger beam is used, triggering a different chemical reaction in the molecule. Mempile says its technology can also be adapted to rewritable (RW) media in the future, but market demand for such a product does not seem to be large.
According to Mempile, its product should be very reliable: simulations and accelerated aging tests showed a data lifetime of about 50 years. Although Mempile currently plans to launch its first product using a red laser (a more mature technology), moving to a blue laser further down the road could allow the technology to reach up to 5 TB of data per disc.
There are currently several other companies developing next-generation optical storage technologies. TDK recently announced a 200GB Blu-ray disc, which seems to be approaching the limit of Blu-ray media technology. A different path was taken by InPhase, which TFOT covered in 2006. InPhase uses holographic technology to record data on a special medium currently holding about 300 GB. InPhase is working on increasing the capacity of its media and hopes to reach 1.6 TB early in the next decade. The main market for InPhase's technology is currently professional users willing to pay extra for a fast, large backup storage system. Mempile is targeting both the professional and consumer markets and hopes to launch its first product early in the next decade.
Although this might seem like a long time to wait, there are good reasons behind the decision. Besides the fact that Mempile developed an entirely new technology, inherently different from that used by conventional CD/DVD/HD media and hence bound to take longer to develop, the current market does not seem ripe for such a revolution. At a time when 25/50GB media still make up just a small percentage of the consumer market, introducing 1 TB media makes little sense from the point of view of most manufacturers. For that reason, we will probably see Mempile's technology on the market just after HD media becomes mainstream.
When this transformation occurs, however, we will reach a whole new stage in data storage. The invention of the CD-ROM made the question of where to store documents (and, to some extent, images) largely irrelevant, since one disc could hold more documents than most people write in a lifetime. The DVD made it possible, for the first time, to save full movies without excessive compression, and only with the recent introduction of HD media did higher-resolution movies fit on one disc. When Mempile's technology reaches the market, it will do the same for all major data types. A single TeraDisc will be able to store over 250,000 high-resolution pictures or MP3s, over 115 DVD-quality movies, or about 40 HD movies, not to mention an unimaginable number of documents. Mempile also sees its technology being used for network-based backup, allowing users to save data from a variety of devices, including desktops, laptops, and digital video recorders (DVRs).
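Those counts follow from dividing one terabyte by typical file sizes. The recalculation below uses rough, assumed sizes (about 4 MB per photo or MP3, a dual-layer DVD, a single-layer Blu-ray):

```python
# How many typical files fit on a 1 TB disc (file sizes are rough assumptions).
DISC_GB = 1000

typical_sizes_gb = {
    "high-quality photo or MP3": 0.004,   # ~4 MB each
    "DVD-quality movie":         8.5,     # dual-layer DVD
    "HD movie":                  25,      # single-layer Blu-ray
}

for item, size_gb in typical_sizes_gb.items():
    print(f"{item}: ~{DISC_GB / size_gb:,.0f} per disc")

# high-quality photo or MP3: ~250,000 per disc
# DVD-quality movie: ~118 per disc
# HD movie: ~40 per disc
```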
Although many people find it hard to imagine the need for so much space on a single disc, it is not inconceivable that, by the time Mempile's technology reaches the market, even higher-resolution video formats requiring hundreds of gigabytes per hour will start to appear, along with entirely new display technologies, such as holographic displays, which could require even more storage space.
Toshiba Corporation has recently announced two major enhancements to its line-up of NAND-flash-based Solid State Drives (SSDs). One is the addition of a huge 256-gigabyte SSD and the second is the launch of a series of small-sized Flash Modules for netbook computers and ultra-mobile PCs (UMPCs).
Toshiba's new high-density SSD brings 256GB of capacity and high read/write speeds to the market in the form of a 2.5-inch drive that uses a relatively inexpensive multi-level cell (MLC) controller.
Flash memory stores data in individual memory cells made of floating-gate transistors. Traditionally, one bit of data was stored in each cell, in so-called single-level cells (SLC). The advantages of this method are faster transfer speeds, lower power consumption, and higher cell endurance; however, since less data is stored per cell, it costs more per megabyte of storage to manufacture. Multi-level cell (MLC) flash memory, by contrast, stores two or more bits in each cell, with "multi-level" referring to the multiple levels of electrical charge used to store multiple bits per cell. By storing more bits per cell, MLC memory achieves lower manufacturing costs; however, it typically has slower transfer speeds, higher power consumption, and lower cell endurance than single-level cell memory. Because of these differences, MLC flash has mostly been used in standard memory cards, while SLC flash has been used in high-performance SSDs.
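The relationship between bits per cell and charge levels is simple: a cell storing n bits must distinguish 2^n charge levels, which is also why sensing gets slower and less forgiving as n grows. A small illustration:

```python
# Bits per cell vs. the number of distinct charge levels the cell must hold.
for bits_per_cell, name in ((1, "SLC"), (2, "MLC"), (3, "TLC")):
    levels = 2 ** bits_per_cell
    print(f"{name}: {bits_per_cell} bit(s) per cell -> "
          f"{levels} charge levels to distinguish")

# SLC: 1 bit(s) per cell -> 2 charge levels to distinguish
# MLC: 2 bit(s) per cell -> 4 charge levels to distinguish
# TLC: 3 bit(s) per cell -> 8 charge levels to distinguish
```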
The new 256GB SSD mounts NAND flash memory on a 70.6mm x 53.6mm x 3.0mm platform. The drive's performance is suitable for common notebook PCs (i.e., we are not talking Intel's new SSD speeds here); according to Toshiba, it is highly reliable, and its high-density data storage supports fast transfer rates. The specifications claim a maximum read speed of 120MB per second and a maximum write speed of 70MB per second (about half that of Intel's new drive) over a SATA2 3.0Gb/s interface.
Toshiba also released new Flash Modules for the growing market of small, stripped-down netbook computers (such as the Asus Eee). The small SSDs, with capacities of 8GB, 16GB, and 32GB, are fabricated on a 50mm x 30mm platform and, according to Toshiba, reach maximum read and write speeds of 80MB and 50MB per second, respectively. They are compatible with the SATA interface and will support the continued development of netbook PCs, UMPCs, and mobile and peripheral applications by offering developers a wider and more varied range of SSD technology.
Toshiba's new drives were showcased at CEATEC in Makuhari, Japan, between September 30 and October 4. Both the 256GB SSD and the Flash Modules are already available for sale, and mass production is expected during the fourth quarter of 2008 (October to December).