Thursday, 17 July 2014

Intel @ CES
Intel’s mobile division has lost an astonishing $2 billion so far this year 
Intel’s Q2 results are excellent and the company’s profits much improved — but its mobile division continues bleeding money. When Intel called this “contra-revenue,” it wasn’t kidding. 
Steve Jobs, holding aloft an IBM PC
Apple and IBM team up to conquer the enterprise market, and crush Microsoft, Blackberry, and Android 
Apple and IBM are pairing up in a major enterprise play to merge their respective hardware and software solutions. This is a huge move with potential implications for the entire enterprise segment. 
Dell XPS 8700 Core i7 Haswell desktop PC - side open and rear views
ET deals: Dell XPS 8700 desktop with GeForce 
This desktop packs the new Core i7-4790 quad-core processor (up to 4GHz), a CPU that will be able to fly through pretty much whatever tasks you throw at it. Not only that, but there’s also a dedicated NVIDIA GeForce GT 720 graphics card to give your gaming a boost. Though the graphics card isn’t quite as high end as the brand new processor, it’s still a powerful card, and this combo will ensure that you’ll be able to use this computer for a wide variety of intensive and everyday tasks.
Crowdfunded satellite rescue mission becoming frantic as time runs thin 
The public has raised a significant sum to save a long-retired satellite called ISEE-3 — but a deadline is fast approaching, and the team might not be ready in time.
Microsoft's used car salesman approach to selling Windows 8
Microsoft slashes prices to compete with Chromebooks: The second coming of the netbook 
The behemoth flails wildly. At its Worldwide Partner Conference, Microsoft finally decided to compete with Chromebooks at the very lowest end of the PC market. Come fall, you’ll be able to get your hands on an HP Stream laptop running Windows at a price point that we haven’t seen since the last time the PC market scraped the barrel (netbooks). With Chromebooks quickly gobbling up market share, and Windows 8 and Windows Phone 8 failing to gain a significant foothold, Microsoft has clearly decided it’s time to resort to desperate measures.
DNA Prison
The UK will sequence 100,000 genomes to better understand cancer & other diseases 
In a recent filing with the Securities and Exchange Commission (SEC), genetics company Illumina revealed that it is partnering with the UK government to sequence 100,000 genomes by 2017. This initiative is focused on gathering data on patients with cancer and rare diseases, and will help researchers better understand the genetic component of human illnesses. While the final budget hasn’t been settled on quite yet, it could end up costing UK taxpayers upwards of $200,000,000 just to sequence the patients’ DNA.
Six of the reflectors that make up the JWST's primary mirror
We’ll find alien life in the next 20 years with our new, awesome telescopes, says NASA 
In a public meeting with NASA’s chief, the agency’s top scientists have said that they expect to find alien life within the next 20 years. Unfortunately, for those hoping that Europa or Mars might harbor life, NASA is fairly confident that the discovery of extraterrestrials will probably be outside our Solar System rather than within it. But still, suffice it to say, the discovery of life of any kind outside of Earth’s atmosphere would be massive news. Within 20 years, we could finally find out that we’re not alone in the universe — and, well, that would change everything.
Department of Justice, DOJ, seal cropped
US government asserts unilateral right to access private data, even if it’s stored outside the US 

The US government is attempting to pry data out of Microsoft regarding a foreign national by claiming that it has a unilateral right to access information because Microsoft is a US company. Will such tactics fly in the post-Snowden era?
Microsoft Project Adam, neural network illustration
Microsoft wants to be part of Judgment Day, too: Introducing the Project Adam artificial intelligence 
Microsoft has unveiled Project Adam, its new artificial intelligence that it claims is 50 times faster than comparable state-of-the-art systems deployed by the likes of Google. Adam can look at an image of almost anything and tell you exactly what it is; it can even differentiate between a Pembroke and Cardigan corgi. Notably, while similar AIs are moving to massively parallel GPU computing, Adam uses plain old CPUs in Microsoft’s Azure cloud — an impressive feat that is only possible thanks to Microsoft’s use of lock-free Hogwild! computing.
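Microsoft hasn’t published Adam’s internals, but the Hogwild! technique mentioned above is public: many workers update shared model weights with no locks at all, relying on the sparsity of the updates so that concurrent writes rarely collide. A toy sketch of that idea (the worker layout, learning rate, and update values are invented for illustration, not Adam’s actual configuration):

```python
import threading
import numpy as np

def hogwild_sgd(work_per_thread, dim, lr=0.1):
    """Apply sparse SGD updates from several threads to shared weights
    with no locking -- the core idea behind Hogwild!-style training."""
    w = np.zeros(dim)

    def worker(updates):
        for idx, grad in updates:
            # Each update touches a single coordinate; Hogwild! relies on such
            # sparsity so that unsynchronized writes rarely collide.
            w[idx] -= lr * grad

    threads = [threading.Thread(target=worker, args=(u,)) for u in work_per_thread]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

# Toy run: 4 workers, each repeatedly nudging its own coordinate.
work = [[(i, 1.0)] * 100 for i in range(4)]
w = hogwild_sgd(work, dim=16)
```

Because each worker here touches its own coordinate, the result is deterministic; in real training the updates overlap occasionally, and the Hogwild! result shows this barely hurts convergence.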
Raspberry Pi 2
Raspberry Pi 2 targeted for 2017, current model gets much-needed upgrade to Model B+ 
The little computer that could, the Raspberry Pi, has been due for a hardware upgrade for quite some time.

Google Glass:

Augmented reality has already entered our lives in the form of simulated experiments and educational apps, but Google is taking it several steps further with Google Glass. Theoretically, with Google Glass you can view social media feeds, texts, and Google Maps, navigate with GPS, and take photos. You also get the latest updates while you are on the go.
It’s truly a new way of seeing, and it’s absolutely possible given that Google co-founder Sergey Brin has demoed the glasses with skydivers and creatives. Currently the device is only available to some developers, with a price tag of $1,500, but expect other tech companies to try the idea out and build an affordable consumer version.

Form 1:

Just as the term suggests, 3D printing is technology that can forge your digital design into a solid, real-life product. It’s nothing new for the advanced mechanical industry, but a personal 3D printer is definitely a revolutionary idea.
Everybody can create their own physical products based on their custom designs, with no approval needed from any giant manufacturer! Even the Aston Martin that was crashed in the James Bond movie was a 3D-printed model!
Form 1 is one such personal 3D printer. The price may sound high, but for the luxury of producing your own prototypes, it’s reasonable.
Imagine a future where every individual professional has the capability to mass produce their own creative physical products without limitation. This is the future where personal productivity and creativity are maximized.

Oculus Rift:

Virtual reality gaming is here in the form of the Oculus Rift. This history-defining 3D headset lets you feel as though you are actually inside a video game. In the Rift’s virtual world, you can turn your head with ultra-low latency and view the world on a high-resolution display.
There are premium products in the market that can do the same, but Rift wants you to enjoy the experience at only $300, and the package even comes as a development kit. This is the beginning of the revolution for next-generation gaming.
oculus rift
The timing is perfect, as the world is currently abuzz with virtual reality, a buzz that can partly be attributed to Sword Art Online, the anime series featuring characters playing games in an entirely virtual world. While we’re getting there, it could take a few more years to reach that level of realism. The Oculus Rift is our first step.

Leap Motion:

The multi-touch desktop failed (miserably) because hands get very tired with prolonged use, but Leap Motion wants to challenge this dark area again with a more advanced idea: it lets you control the desktop with your fingers, without touching the screen.
It’s not your typical motion sensor: Leap Motion lets you scroll web pages, zoom in on maps and photos, sign documents, and even play a first-person shooter with only hand and finger movements. A smooth, accurate response is the crucial point here. More importantly, if this device can work seamlessly with the Oculus Rift to simulate a real-time gaming experience, gaming is going to get a major makeover.

Eye Tribe:

Eye tracking has been actively discussed by technology enthusiasts for years, but it’s genuinely challenging to implement. Eye Tribe actually did it, successfully creating technology that lets you control your tablet, play a flight simulator, and even slice fruit in Fruit Ninja with only your eye movements.
It’s basically common eye-tracking technology combined with a front-facing camera and some serious computer-vision algorithms, and voila, fruit slicing done with the eyes! A live demo was given at LeWeb this year, and we may actually see it in action on mobile devices in 2013.
Currently the company is still seeking partnerships to bring this sci-fi tech to the consumer market, but you and I both know this product is simply too awesome to fail.
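Eye Tribe hasn’t published its algorithm, but a common building block in gaze tracking is a calibration step: the user looks at known on-screen targets while the camera records pupil positions, and a least-squares fit then maps pupil coordinates to screen coordinates. A toy sketch under that assumption (the affine model and all numbers are illustrative, not Eye Tribe’s actual method):

```python
import numpy as np

def fit_gaze_mapping(pupil_pts, screen_pts):
    """Fit an affine map [sx, sy] = [px, py, 1] @ A from calibration samples."""
    P = np.hstack([pupil_pts, np.ones((len(pupil_pts), 1))])  # add bias column
    A, *_ = np.linalg.lstsq(P, screen_pts, rcond=None)        # least-squares fit
    return A

def gaze_to_screen(A, pupil_xy):
    """Map a new pupil position to estimated screen coordinates."""
    return np.array([*pupil_xy, 1.0]) @ A

# Synthetic calibration: pretend camera-space pupil positions are a
# scaled and shifted copy of the screen targets the user looked at.
screen = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080], [960, 540]], float)
pupil = screen * 0.01 + 3.0
A = fit_gaze_mapping(pupil, screen)
est = gaze_to_screen(A, pupil[4])   # should land near screen center (960, 540)
```

Real systems add head-pose compensation and nonlinear terms, but this calibration idea is the core of turning "where the pupil is" into "where you are looking."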


SmartThings:

The current problem with most devices is that they function as standalone products, and it takes real effort for tech competitors to partner with one another and build products that can truly connect. SmartThings is here to make every device in your home, digital or non-digital, connect and work for you.
With SmartThings you can get your smoke alarms and humidity, pressure, and vibration sensors to detect changes in your house and alert you through your smartphone! Imagine the possibilities.
You could track who’s been inside your house, turn on the lights as you enter a room, and shut windows and doors when you leave the house, all with the help of something that costs only $500! Feel like a tech lord in your castle with this marvel.
Firefox OS:
iOS and Android are great, but they each have their own rules and policies that certainly inhibit the creative efforts of developers. Mozilla has since decided to build a new mobile operating system from scratch, one that will focus on true openness, freedom and user choice. It’s Firefox OS.
Firefox OS is built on Gonk, Gecko and Gaia software layers – for the rest of us, it means it is built on open source, and it carries web technologies such as HTML5 and CSS3.
Developers can create and debut web apps without the blockade of requirements set by app stores, and users can even customize the OS to their needs. Currently the OS has made its debut on Android-compatible phones, and the impressions so far are great.
You can use the OS to do the essential tasks you do on iOS or Android: calling friends, browsing the web, taking photos, playing games. All are possible on Firefox OS, which is set to shake up the smartphone market.

Project Fiona:

Meet the first generation of the gaming tablet. Razer’s Project Fiona is a serious tablet built for hardcore gaming. Once it’s out, it will mark the frontier for future tablets, as tech companies may want to build their own gaming-dedicated tablets; for now, Fiona is the only one slated to debut in 2013.
This beast features a next-generation Intel Core i7 processor geared to render all your favorite PC games, all in the palm of your hands. Crowned the best gaming-accessories manufacturer, Razer clearly knows how to build user experience straight into the tablet: a 3-axis gyro, magnetometer, accelerometer, and a full-screen user interface supporting multi-touch. My body and soul are ready.


Parallella:

Parallella is going to change the way computers are made, and Adapteva offers you the chance to join in on this revolution. Simply put, it’s a supercomputer for everyone: an energy-efficient computer built for processing complex software simultaneously and effectively. Real-time object tracking, holographic heads-up displays, and speech recognition will become even stronger and smarter with Parallella.
The project has been successfully funded, with an estimated delivery date of February 2013. For a mini supercomputer, the price seems really promising: a magical $99! It’s not recommended for non-programmers and non-Linux users, but the kit is loaded with development software for creating your own projects.
I never thought the future of computing could be kick-started with just $99, but crowdfunding platforms have made it possible.
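The Epiphany chip has its own programming model, but the divide-and-conquer pattern a many-core board encourages can be sketched in ordinary Python: split a CPU-bound job into independent chunks and run them concurrently. (Threads keep this sketch short; real CPU parallelism in Python would use processes, since the GIL serializes pure-Python threads.)

```python
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def count_primes(lo, hi):
    """Count primes in [lo, hi) -- a CPU-bound chunk that splits cleanly."""
    return sum(1 for n in range(lo, hi) if is_prime(n))

# Split 0..20000 into 4 chunks and farm them out to 4 workers, the same
# pattern a many-core board like Parallella is designed for.
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = [pool.submit(count_primes, i * 5000, (i + 1) * 5000) for i in range(4)]
    total = sum(f.result() for f in parts)
```

The point is the decomposition: each chunk needs no data from the others, which is exactly the kind of workload that scales across many small cores.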

Google Driverless Car:

I can still remember watching I, Robot as a teen and being skeptical of my brother’s statement that one day the driverless car would become reality. It’s now a reality, made possible by… a search engine company: Google.
While the data source is still a secret recipe, the Google driverless car is powered by artificial intelligence that uses input from video cameras inside the car, a sensor on the vehicle’s roof, and radar and position sensors attached to different parts of the car. It sounds like a lot of effort to mimic human intelligence in a car, but so far the system has successfully driven 1,609 kilometres without human commands!
Use Any Phone on Any Wireless Network:
The reason most cell phones are so cheap is that wireless carriers subsidize them so you'll sign a long-term contract. Open access could change the economics of the mobile phone (and mobile data) business dramatically as the walls preventing certain devices from working on certain networks come down. We could also see a rapid proliferation of cell phone models, with smaller companies becoming better able to make headway into formerly closed phone markets.

What is it? Two years is an eternity in the cellular world. The original iPhone was announced, introduced, and discontinued in less than that time, yet carriers routinely ask you to sign up for two-year contracts if you want access to their discounted phones. (It could be worse--in other countries, three years is normal.) Verizon launched the first volley late last year when it promised that "any device, any application" would soon be allowed on its famously closed network. Meanwhile, AT&T and T-Mobile like to note that their GSM networks have long been "open."
When is it coming? Open access is partially here: You can use almost any unlocked GSM handset on AT&T or T-Mobile today, and Verizon Wireless began certifying third-party devices for its network in July (though to date the company has approved only two products). But the future isn't quite so rosy, as Verizon is dragging its feet a bit on the legal requirement that it keep its newly acquired 700-MHz network open to other devices, a mandate that the FCC agreed to after substantial lobbying by Google. Some experts have argued that the FCC provisions aren't wholly enforceable. However, we won't really know how "open" is defined until the new network begins rolling out, a debut slated for 2010. 

Your Fingers Do Even More Walking:
Log in to your airline's Web site. Check in. Print out your boarding pass. Hope you don't lose it. Hand the crumpled pass to a TSA security agent and pray you don't get pulled aside for a pat-down search. When you're ready to fly home, wait in line at the airport because you lacked access to a printer in your hotel room. Can't we come up with a better way?
Last year Microsoft introduced Surface, a table with a built-in monitor and touch screen; many industry watchers have seen it as a bellwether for touch-sensitive computing embedded into every device imaginable. Surface is a neat trick, but the reality of touch devices may be driven by something entirely different and more accessible: the Apple iPhone. 

What is it? With the iPhone, "multitouch" technology (which lets you use more than one finger to perform specific actions) reinvented what we knew about the humble touchpad. Tracing a single finger on most touchpads looks positively simian next to some of the tricks you can do with two or more digits. Since the iPhone's launch, multitouch has found its way into numerous mainstream devices, including the Asus Eee PC 900 and a Dell Latitude tablet PC. Now all eyes are turned back to Apple, to see how it will further adapt multitouch (which it has already brought to its laptops' touchpads). Patents that Apple has filed for a multitouch tablet PC have many people expecting the company to dive into this neglected market, finally bringing tablets into the mainstream and possibly sparking explosive growth in the category. 
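The arithmetic behind one signature multitouch trick, pinch-to-zoom, is simple: the zoom factor is just the ratio of the distance between two fingers now versus a moment ago. A minimal sketch (coordinates are invented screen pixels):

```python
import math

def pinch_zoom_factor(touches_before, touches_after):
    """Zoom factor implied by two touch points moving between frames:
    ratio of finger separation after vs. before (>1 zoom in, <1 zoom out)."""
    (a0, b0), (a1, b1) = touches_before, touches_after
    return math.dist(a1, b1) / math.dist(a0, b0)

before = [(100, 100), (200, 100)]   # fingers 100 px apart
after = [(50, 100), (250, 100)]     # fingers spread to 200 px apart
zoom = pinch_zoom_factor(before, after)   # 2x zoom in
```

Single-touch pads can’t do this at all, because with one contact point there is no separation to measure; that’s the whole leap multitouch makes.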

When is it coming? It's not a question of when multitouch will arrive, but how quickly the trend will grow. Fewer than 200,000 touch-screen devices were shipped in 2006. iSuppli analysts have estimated that a whopping 833 million will be sold in 2013. The real guessing game is figuring out when the old "single-touch" pads become obsolete, possibly taking physical keyboards along with them in many devices. 

Cell Phones Are the New Paper:

Next year, you'll be able to drop paper boarding passes and event tickets and just flash your phone at the gate.
What is it? The idea of the paperless office has been with us since Bill Gates was in short pants, but no matter how sophisticated your OS or your use of digital files in lieu of printouts might be, they're of no help once you leave your desk. People need printouts of maps, receipts, and instructions when a computer just isn't convenient. PDAs failed to fill that need, so coming to the rescue are their replacements: cell phones.
Applications to eliminate the need for a printout in nearly any situation are flooding the market. Cellfire offers mobile coupons you can pull up on your phone and show to a clerk, and the Tickets@Phone service now makes digital concert passes available via cell phone. The final frontier, though, remains the airline boarding pass, which has resisted this next paperless step since the advent of Web-based check-in.
When is it coming? Some cell-phone apps that replace paper are here now (just look at the ones for the iPhone), and even paperless boarding passes are creeping forward. Continental has been experimenting with a cell-phone check-in system that lets you show an encrypted, 2D bar code on your phone to a TSA agent in lieu of a paper boarding pass. The agent scans the bar code with an ordinary scanner, and you're on your way.
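Continental's actual barcode format isn't described here, but the general shape of such a system is easy to sketch: sign the pass data so the gate scanner can detect tampering, then encode the resulting token (in practice, into a 2D bar code). A simplified, hypothetical scheme using an HMAC signature rather than real encryption (the key and field layout are invented):

```python
import base64
import hashlib
import hmac

AIRLINE_KEY = b"demo-secret"  # hypothetical signing key held by the airline

def make_pass(passenger, flight, seat):
    """Sign a pass payload; a real system would render the token as a 2D bar code."""
    payload = f"{passenger}|{flight}|{seat}".encode()
    sig = hmac.new(AIRLINE_KEY, payload, hashlib.sha256).digest()  # 32 bytes
    return base64.urlsafe_b64encode(payload + sig).decode()

def verify_pass(token):
    """Recompute the HMAC over the payload and compare to the embedded one."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, sig = raw[:-32], raw[-32:]
    expected = hmac.new(AIRLINE_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

token = make_pass("J DOE", "CO1234", "12A")
ok = verify_pass(token)
```

A forged token fails verification because the attacker can't produce a valid signature without the airline's key, which is what lets an ordinary scanner trust a pass displayed on a passenger's own phone.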
Gesture-Based Remote Control:

Soon you'll be able to simply point at your television and control it with hand gestures.
We love our mice, really we do. Sometimes, however, such as when we're sitting on the couch watching a DVD on a laptop, or when we're working across the room from an MP3-playing PC, it just isn't convenient to drag a hockey puck and click on what we want. Attempts to replace the venerable mouse--whether with voice recognition or brain-wave scanners--have invariably failed. But an alternative is emerging.  
What is it? Compared with the intricacies of voice recognition, gesture recognition is a fairly simple idea that is only now making its way into consumer electronics. The idea is to employ a camera (such as a laptop's Webcam) to watch the user and react to the person's hand signals. Holding your palm out flat would indicate "stop," for example, if you're playing a movie or a song. And waving a fist around in the air could double as a pointing system: You would just move your fist to the right to move the pointer right, and so on.
When is it coming? Gesture recognition systems are creeping onto the market now. Toshiba, a pioneer in this market, has at least one product out that supports an early version of the technology: the Qosmio G55 laptop, which can recognize gestures to control multimedia playback. The company is also experimenting with a TV version of the technology, which would watch for hand signals via a small camera atop the set. Based on my tests, though, the accuracy of these systems still needs a lot of work.
Gesture recognition is a neat way to pause the DVD on your laptop, but it probably remains a way off from being sophisticated enough for broad adoption. All the same, its successful development would excite tons of interest from the "can't find the remote" crowd. Expect to see gesture recognition technology make some great strides over the next few years, with inroads into mainstream markets by 2012.
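Stripped to its essentials, camera-based gesture recognition is about tracking where the hand is from frame to frame. The toy sketch below (synthetic binary frames and an invented threshold, nothing like Toshiba's production system) classifies a left-to-right swipe from the motion of the hand blob's centroid:

```python
import numpy as np

def centroid(frame):
    """Centroid (x, y) of the bright 'hand' pixels in a binary frame."""
    ys, xs = np.nonzero(frame)
    return xs.mean(), ys.mean()

def classify_swipe(frames, min_shift=5):
    """Label a frame sequence as a left/right swipe from net centroid motion."""
    x0, _ = centroid(frames[0])
    x1, _ = centroid(frames[-1])
    dx = x1 - x0
    if dx > min_shift:
        return "right"
    if dx < -min_shift:
        return "left"
    return "none"

# Synthetic clip: a 3x3 blob of 'hand' pixels sliding right across a 32x32 frame.
frames = []
for step in range(10):
    f = np.zeros((32, 32), dtype=np.uint8)
    x = 2 + step * 2
    f[10:13, x:x + 3] = 1
    frames.append(f)
gesture = classify_swipe(frames)
```

Real systems first have to find the hand in a cluttered color image, which is where most of the accuracy problems mentioned above come from; the motion classification itself is the easy part.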

Radical Simplification Hits the TV Business:
The back of most audiovisual centers looks like a tangle of snakes that even Medusa would turn away from. Similarly, the bowl of remote controls on your coffee table appeals to no one. The Tru2way platform may simplify things once and for all.
What is it? Who can forget CableCard, a technology that was supposed to streamline home A/V installations but that ultimately went nowhere despite immense coverage and hype? CableCard just didn't do enough--and what it managed to do, it didn't do very well.
Tru2way is a set of services and standards designed to pick up the pieces of CableCard's failure by upgrading what that earlier standard could do (including support for two-way communications features like programming guides and pay-per-view, which CableCard TVs couldn't handle), and by offering better compatibility, improved stability, and support for dual-tuner applications right out of the box. So if you have a Tru2way-capable TV, you should need only to plug in a wire to be up and running with a full suite of interactive cable services (including local search features, news feeds, online shopping, and games)--all sans additional boxes, extra remotes, or even a visit from cable-company technicians.
When is it coming? Tru2way sets have been demonstrated all year, and Chicago and Denver will be the first markets with the live technology. Does Tru2way have a real shot? Most of the major cable companies have signed up to implement it, as have numerous TV makers, including LG, Panasonic, Samsung, and Sony. Panasonic began shipping two Tru2way TVs in late October, and Samsung may have sets that use the technology available in early to mid-2009.

Curtains for DRM:

RealDVD's DRM-free format makes taking flicks on the road easier. This is the future of entertainment.
Petrified of piracy, Hollywood has long relied on technical means to keep copies of its output from making the rounds on peer-to-peer networks. It hasn't worked: Tools to bypass DRM on just about any kind of media are readily available, and feature films often hit BitTorrent even before they appear in theaters. Unfortunately for law-abiding citizens, DRM is less a deterrent to piracy than a nuisance that gets in the way of enjoying legally obtained content on more than one device.  
What is it? It's not what it is, it's what it isn't--axing DRM means no more schemes to prevent you from moving audio or video from one form of media to another. The most ardent DRM critics dream of a day when you'll be able to take a DVD, pop it in a computer, and end up with a compressed video file that will play on any device in your arsenal. Better yet, you won't need that DVD at all: You'll be able to pay a few bucks for an unprotected, downloadable version of the movie that you can redownload any time you wish.
When is it coming? Technologically speaking, nothing is stopping companies from scrapping DRM tomorrow. But legally and politically, resistance persists. Music has largely made the transition already: major download stores now sell unprotected tracks that you can play on as many devices as you want.
Video is taking baby steps in the same direction, albeit slowly so far. One recent example: RealNetworks' RealDVD software (which is now embroiled in litigation) lets you rip DVDs to your computer with one click, but they're still protected by a DRM system. Meanwhile, studios are experimenting with bundling legally rippable digital copies of their films with packaged DVDs, while online services are tiptoeing into letting downloaders burn a copy of a digital movie to disc.
Memristor: A Groundbreaking New Circuit:

This simple memristor circuit could soon transform all electronic devices.
Since the dawn of electronics, we've had only three types of circuit components--resistors, inductors, and capacitors. But in 1971, UC Berkeley researcher Leon Chua theorized the possibility of a fourth type of component, one that would be able to measure the flow of electric current: the memristor. Now, just 37 years later, Hewlett-Packard has built one.
What is it? As its name implies, the memristor can "remember" how much current has passed through it. And by alternating the amount of current that passes through it, a memristor can also become a one-element circuit component with unique properties. Most notably, it can save its electronic state even when the current is turned off, making it a great candidate to replace today's flash memory.
Memristors will theoretically be cheaper and far faster than flash memory, and allow far greater memory densities. They could also replace RAM chips as we know them, so that, after you turn off your computer, it will remember exactly what it was doing when you turn it back on, and return to work instantly. This lowering of cost and consolidating of components may lead to affordable, solid-state computers that fit in your pocket and run many times faster than today's PCs.
Someday the memristor could spawn a whole new type of computer, thanks to its ability to remember a range of electrical states rather than the simplistic "on" and "off" states that today's digital processors recognize. By working with a dynamic range of data states in an analog mode, memristor-based computers could be capable of far more complex tasks than just shuttling ones and zeroes around.
When is it coming? Researchers say that no real barrier prevents implementing the memristor in circuitry immediately. But it's up to the business side to push products through to commercial reality. Memristors made to replace flash memory (at a lower cost and lower power consumption) will likely appear first; HP's goal is to offer them by 2012. Beyond that, memristors will likely replace both DRAM and hard disks in the 2014-to-2016 time frame. As for memristor-based analog computers, that step may take 20-plus years. 
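HP's published description treats memristance as a function of the total charge that has passed through the device. A small numeric sketch of that linear-drift idea (all parameter values are illustrative, not HP's actual figures) shows the key property: the state survives with the power off.

```python
def memristance(q, r_on=100.0, r_off=16000.0, q_max=1e-4):
    """Linear dopant-drift model: resistance is set by the total charge q
    that has flowed through the device (clamped to [0, q_max])."""
    q = min(max(q, 0.0), q_max)
    return r_off - (r_off - r_on) * (q / q_max)

# Fresh device: no charge has passed, so resistance sits at r_off.
q = 0.0
r_start = memristance(q)

# Drive 1 uA for 50 s: 50 uC of charge flows; the device "remembers" it.
q += 1e-6 * 50
r_mid = memristance(q)

# Cut the power. q is unchanged, so the resistance state persists --
# exactly the non-volatile, flash-like behavior described above.
r_after_power_off = memristance(q)
```

Because the resistance can sit anywhere between r_on and r_off, the same model also hints at the analog, multi-state computing possibilities mentioned above, rather than just binary on/off storage.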

32-Core CPUs From Intel and AMD:

If your CPU has only a single core, it's officially a dinosaur. In fact, quad-core computing is now commonplace; you can even get laptop computers with four cores today. But we're really just at the beginning of the core wars: Leadership in the CPU market will soon be decided by who has the most cores, not who has the fastest clock speed.
8-core Intel and AMD CPUs are about to make their way onto desktop PCs everywhere. Next stop: 16 cores.
What is it? With the gigahertz race largely abandoned, both AMD and Intel are trying to pack more cores onto a die in order to continue to improve processing power and aid with multitasking operations. Miniaturizing chips further will be key to fitting these cores and other components into a limited space. Intel will roll out 32-nanometer processors (down from today's 45nm chips) in 2009.
When is it coming? Intel has been very good about sticking to its road map. A six-core CPU based on the Itanium design should be out imminently, when Intel then shifts focus to a brand-new architecture called Nehalem, to be marketed as Core i7. Core i7 will feature up to eight cores, with eight-core systems available in 2009 or 2010. (And an eight-core AMD project called Montreal is reportedly on tap for 2009.)
After that, the timeline gets fuzzy. Intel reportedly canceled a 32-core project called Keifer, slated for 2010, possibly because of its complexity (the company won't confirm this, though). That many cores requires a new way of dealing with memory; apparently you can't have 32 brains pulling out of one central pool of RAM. But we still expect cores to proliferate when the kinks are ironed out: 16 cores by 2011 or 2012 is plausible (when transistors are predicted to drop again in size to 22nm), with 32 cores to follow.

Nehalem and Swift Chips Spell the End of Stand-Alone Graphics Boards:

When AMD purchased graphics card maker ATI, most industry observers assumed that the combined company would start working on a CPU-GPU fusion. That work is further along than you may think.
What is it? While GPUs get tons of attention, discrete graphics boards are a comparative rarity among PC owners, as 75 percent of laptop users stick with good old integrated graphics, according to Mercury Research. Among the reasons: the extra cost of a discrete graphics card, the hassle of installing one, and its drain on the battery. Putting graphics functions right on the CPU eliminates all three issues.
Chip makers expect the performance of such on-die GPUs to fall somewhere between that of today's integrated graphics and stand-alone graphics boards--but eventually, experts believe, their performance could catch up and make discrete graphics obsolete. One potential idea is to devote, say, 4 cores in a 16-core CPU to graphics processing, which could make for blistering gaming experiences. 

When is it coming? Intel's soon-to-come Nehalem chip includes graphics processing within the chip package, but off of the actual CPU die. AMD's Swift (aka the Shrike platform), the first product in its Fusion line, reportedly takes the same design approach, and is also currently on tap for 2009.
Putting the GPU directly on the same die as the CPU presents challenges--heat being a major one--but that doesn't mean those issues won't be worked out. Intel's two Nehalem follow-ups, Auburndale and Havendale, both slated for late 2009, may be the first chips to put a GPU and a CPU on one die, but the company isn't saying yet. 

USB 3.0 Speeds Up Performance on External Devices:

The USB connector has been one of the greatest success stories in the history of computing, with more than 2 billion USB-connected devices sold to date. But in an age of terabyte hard drives, the once-cool throughput of 480 megabits per second that a USB 2.0 device can realistically provide just doesn't cut it any longer. 

What is it? USB 3.0 (aka "SuperSpeed USB") promises to increase performance by a factor of 10, pushing the theoretical maximum throughput of the connector all the way up to 4.8 gigabits per second, or processing roughly the equivalent of an entire CD-R disc every second. USB 3.0 devices will use a slightly different connector, but USB 3.0 ports are expected to be backward-compatible with current USB plugs, and vice versa. USB 3.0 should also greatly enhance the power efficiency of USB devices, while increasing the juice (nearly one full amp, up from 0.1 amps) available to them. That means faster charging times for your iPod--and probably even more bizarre USB-connected gear like the toy rocket launchers and beverage coolers that have been festooning people's desks.
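The CD-per-second claim is easy to sanity-check: 4.8 gigabits per second is 600 megabytes per second, so a 700 MB CD-R takes just over a second at the theoretical maximum. A quick back-of-the-envelope calculation:

```python
def transfer_time_seconds(size_bytes, link_bits_per_s):
    """Idealized transfer time: payload size over raw link rate (no protocol
    overhead), so real-world figures will be noticeably worse."""
    return size_bytes * 8 / link_bits_per_s

CDR = 700 * 1000 * 1000   # a 700 MB CD-R, using decimal megabytes
USB2 = 480e6              # USB 2.0: 480 Mbps
USB3 = 4.8e9              # USB 3.0: 4.8 Gbps theoretical maximum

t2 = transfer_time_seconds(CDR, USB2)   # roughly 11.7 s per disc
t3 = transfer_time_seconds(CDR, USB3)   # roughly 1.2 s per disc
```

So "an entire CD-R every second" is in the right ballpark for the raw spec, and the factor-of-10 gap between t2 and t3 is exactly the promised speedup.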
When is it coming? The USB 3.0 spec is nearly finished, with consumer gear now predicted to come in 2010. Meanwhile, a host of competing high-speed plugs--DisplayPort, eSATA, and HDMI--will soon become commonplace on PCs, driven largely by the onset of high-def video. Even FireWire is looking at an imminent upgrade of up to 3.2 gbps performance. The port proliferation may make for a baffling landscape on the back of a new PC, but you will at least have plenty of high-performance options for hooking up peripherals. 

Wireless Power Transmission:

Wireless power transmission has been a dream since the days when Nikola Tesla imagined a world studded with enormous Tesla coils. But aside from advances in recharging electric toothbrushes, wireless power has so far failed to make significant inroads into consumer-level gear.
What is it? Intel has demonstrated a "wireless resonant energy link," and it works by sending a specific, 10-MHz signal through a coil of wire; a similar, nearby coil of wire resonates in tune with the frequency, causing electrons to flow through that coil too. Though the design is primitive, it can light up a 60-watt bulb with 70 percent efficiency.
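The physics hinges on both coils sharing a resonant frequency, which for an ideal LC tank is f = 1/(2*pi*sqrt(L*C)). A quick calculation of the capacitance that would tune a coil to a 10-MHz signal (the 1 µH inductance is an illustrative value, not a figure from the Intel project):

```python
import math

def resonant_frequency_hz(L_henries, C_farads):
    """f = 1 / (2*pi*sqrt(L*C)) for an ideal LC tank circuit."""
    return 1.0 / (2 * math.pi * math.sqrt(L_henries * C_farads))

def capacitance_for(f_hz, L_henries):
    """Solve the same formula for the capacitance C."""
    return 1.0 / (L_henries * (2 * math.pi * f_hz) ** 2)

L = 1e-6                        # a 1 uH coil (illustrative value)
C = capacitance_for(10e6, L)    # capacitance that tunes it to 10 MHz: ~253 pF
f = resonant_frequency_hz(L, C) # round-trip check: back to 10 MHz
```

Only a receiver tuned to the same frequency couples strongly to the transmitter, which is why the scheme can move useful power to the intended coil without broadcasting it everywhere.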
When is it coming? Numerous obstacles remain, the first of which is that the Intel project uses alternating current. To charge gadgets, we'd have to see a direct-current version, and the size of the apparatus would have to be considerably smaller. Numerous regulatory hurdles would likely have to be cleared in commercializing such a system, and it would have to be thoroughly vetted for safety concerns.
Assuming those all go reasonably well, such receiving circuitry could be integrated into the back of your laptop screen in roughly the next six to eight years.
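
The resonance at the heart of such a link follows the standard LC formula f = 1 / (2π√(LC)): both coils must be tuned to the same frequency. The 10-MHz figure and the 60-watt, 70-percent-efficiency demo come from the text; the inductance and capacitance values below are purely illustrative, not details of the Intel project.

```python
import math

# Resonant inductive coupling sketch: both coils share the resonant
# frequency f = 1 / (2*pi*sqrt(L*C)). L and C below are assumed values
# chosen to land near the article's 10-MHz figure.

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

L = 1e-6       # 1 microhenry coil (assumed)
C = 253.3e-12  # ~253 pF tuning capacitance (assumed)

f = resonant_frequency_hz(L, C)
print(f"Resonant frequency: {f / 1e6:.1f} MHz")  # 10.0 MHz

# At 70% link efficiency, lighting a 60 W bulb needs ~86 W at the source:
print(f"Transmitter power for a 60 W load: {60 / 0.70:.0f} W")  # 86 W
```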

Saturday, 17 May 2014

Advanced Nanoscale ULSI Interconnects: Fundamentals and Applications

In Advanced ULSI Interconnects – Fundamentals and Applications we bring a comprehensive description of copper-based interconnect technology for ultra-large-scale integration (ULSI) in integrated circuit (IC) applications. Integrated circuit technology is the base of all modern electronic systems. You can find electronic systems everywhere today: from toys and home appliances to airplanes and space shuttles. Electronic systems form the hardware that, together with software, is the basis of the modern information society. The rapid growth and vast exploitation of modern electronic systems create a strong demand for new and improved electronic circuits, as demonstrated by the amazing progress in the field of ULSI technology. This progress is well described by the famous "Moore's law," which states, in its most general form, that all the metrics that describe integrated circuit performance (e.g., speed, number of devices, chip area) improve exponentially as a function of time. For example, the number of components per chip doubles every 18 months, and the critical dimension on a chip has shrunk by 50% every 2 years on average over the last 30 years. This rapid growth in integrated circuit technology results in highly complex integrated circuits with an increasing number of interconnects on chips and between the chip and its package. The complexity of the interconnect network on chips involves an increasing number of metal lines per interconnect level, more interconnect levels, and at the same time a reduction in interconnect line critical dimensions. The continuous shrinkage in metal line critical dimension forced the transition from aluminum-based interconnect technology, which was dominant from the early days of modern microelectronics, to copper-based metallization, which became the dominant technology in recent years.
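
The two scaling rates quoted for Moore's law (components doubling every 18 months, critical dimension halving every 2 years) are simple exponentials, which a short sketch can make concrete. The 1000 nm starting feature size is an arbitrary illustrative value.

```python
# Moore's-law arithmetic as stated in the text: component count doubles
# every 18 months; the critical dimension halves every 2 years.

def components_after(years: float, start_count: float = 1.0) -> float:
    return start_count * 2 ** (years / 1.5)   # doubling every 18 months

def critical_dimension_after(years: float, start_nm: float) -> float:
    return start_nm * 0.5 ** (years / 2.0)    # halving every 2 years

# Over one decade, component count grows roughly a hundredfold:
print(f"Component growth over 10 years: {components_after(10):.0f}x")  # 102x
# ...while a 1000 nm feature (assumed start) shrinks to ~31 nm:
print(f"1000 nm feature after 10 years: "
      f"{critical_dimension_after(10, 1000):.0f} nm")  # 31 nm
```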
As interconnect critical dimensions shrank to the nano-scale range (below 100 nm), more aggressive interconnect designs on smaller scales became possible, thus keeping "Moore's law" on pace. In addition to the introduction of copper as the main conducting material, it was clear that new dielectric materials with a low dielectric constant ("low-k" materials) should replace the conventional silicon dioxide interlevel dielectric (ILD). Thus the overall technology shift is from "aluminum–silicon dioxide" ULSI interconnect technology to "copper–low-k" technology. Cu–low-k technology allows patterning of 45 nm wide interconnects in mass production and will probably allow further shrinkage to 15–22 nm lines in the next 10 years. Copper metallization is achieved by electrochemical processing, or processes that involve electrochemistry. The metal is deposited by electrochemical deposition, and its top surface is planarized (i.e., made flat, or "planar" in industry jargon) by chemical mechanical polishing (CMP). Electroplating is an ancient technique for metal deposition; its application to ULSI technology with nano-scale patterning was a major challenge to scientists and engineers over the last 20 years. The successful introduction of copper metallization as the leading technology demonstrated the capability and compatibility of electrochemical processing in the nano-scale regime. In this book we review the basic technologies that are used today for copper metallization in ULSI applications: deposition and planarization. We describe the materials that are used, their properties, and the way they are all integrated. We describe the copper integration processes and a mathematical model for the electrochemical processes in the nano-scale regime.
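
The payoff of the aluminum/SiO2 to copper/low-k transition can be sketched with the usual figure of merit: for a fixed geometry, interconnect RC delay scales with the product of metal resistivity and dielectric constant. The resistivities below are textbook bulk values, and k = 2.7 is a representative assumption for a low-k film (actual materials vary).

```python
# Why Cu/low-k beats Al/SiO2: RC delay ~ (resistivity) x (dielectric k)
# for a fixed wire geometry. Material constants are illustrative bulk
# values; k = 2.7 for the low-k dielectric is an assumption.

RHO_AL = 2.7   # aluminum resistivity, micro-ohm*cm (bulk)
RHO_CU = 1.7   # copper resistivity, micro-ohm*cm (bulk)
K_SIO2 = 3.9   # silicon dioxide dielectric constant
K_LOWK = 2.7   # representative low-k dielectric (assumed)

def relative_rc(rho: float, k: float) -> float:
    """RC delay figure of merit, up to a geometry-dependent constant."""
    return rho * k

old = relative_rc(RHO_AL, K_SIO2)
new = relative_rc(RHO_CU, K_LOWK)
print(f"Cu/low-k delay relative to Al/SiO2: {new / old:.2f}")  # 0.44
```

Under these assumptions the combined switch cuts wire delay by more than half, which is why both the conductor and the dielectric had to change together.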
We present the way we characterize and measure the various conducting and insulating thin films that are used to build the copper interconnect multilayer structures using the "damascene" (embedded metallization) process. We also present various novel nano-scale technologies that will link modern nano-scale electronics to future nanoscale-based systems. Following this preface comes an introduction covering the fundamentals of Cu electroplating for ULSI – where electrochemistry meets electrical engineering. A historical review then describes interconnect technology from the early days of modern microelectronics until today, with an overview of materials, technology, and process integration that brings into perspective the ways metallization is accomplished today. Further understanding of the scaling laws is presented next. Both semiconductor and interconnect progress are described, since they are interwoven with each other: progress in interconnects always follows progress in transistor science and technology. Although this book focuses on interconnect technology, it should be clear that interconnects link transistors, and the overall circuit operation is achieved by the combined interaction of a highly complex network. The basic role of interconnects in such networks, and how interconnect performance is linked to overall circuit performance, are discussed next. One of the key issues in these increasingly complex systems is whether other paradigms exist; one such paradigm is the 3D integration of ULSI components, also known as "3D integration." We then present a detailed review of interconnect materials. There is no doubt that the advancement in materials science and technology in recent years was the key to the advances in ULSI technology.
There are a few groups of materials in ULSI interconnects: conductors (e.g., copper, silicides), barrier layers (e.g., Ta/TaN, TiN, WC), capping layers (dielectrics such as nitride-doped amorphous silicon or silicon nitride, or electroless CoWP), and dielectrics with a dielectric constant less than that of silicon dioxide (i.e., low-k materials). We dedicate a special part to the material properties of silicides (metal–silicon compounds), which are used as the conducting interfacing material between the metallic interconnect network and the semiconductor transistors. The following parts bring an intensive review of low-k materials. They pose a major challenge, since they must compete with conventional silicon dioxide, which, although its dielectric constant is higher, has excellent electrical and mechanical properties and whose process technology is well established and entrenched in the industry and research communities. We then focus on the actual electrochemical processes that are used for ULSI interconnect applications. We first present the copper plating principles and their application to sub-micron patterning. Additives are described in light of their role in the fully planar embedded metallization technology (i.e., the damascene process). In addition to conventional processes we also mention some novel ones. Among them, atomic layer deposition is the most promising and is under intensive investigation due to its ability to form ultra-thin seed layers with excellent uniformity and step coverage. Other interesting nano-scale processes are the deposition of nanoparticles, either inorganic or organic, which yield nano-scale metal lines that may, one day, be used for nano-electronics applications. A common approach that links basic modeling to actual structure is the use of computer-aided design (CAD), simulating the desired structure based on the fundamental physical and chemical models of the process.
For example, electrochemical deposition into narrow features with critical dimensions below 100 nm and aspect ratios (i.e., the ratio of height to width) of more than 2 to 1 requires a special process called "superfilling." In such a process, the filling of the bottom of the feature is much faster than the deposition on its upper "shoulders." Rapid, complete filling of the feature is achieved without defects (e.g., voids, seams) and with relatively thin metal on the shoulders that can be reliably removed in the ensuing chemical mechanical polishing planarization step. The discovery of the superfilling process was a major breakthrough in the initial stages of the introduction of copper metallization. We give a detailed description of such modeling of copper metallization using electrochemical processes for nano-scale metallization, and then turn to the embedded metal process known as the damascene process. Following a detailed description of the various damascene concepts and their associated process steps, we discuss the process integration issues. The integration involves linking all the various components: starting at the lithography level, patterning the wafer, depositing the barrier and seed layers, followed by copper plating and its chemical mechanical polishing (CMP) planarization, and ending with capping layer deposition. In this part we focus on the basic role of each component in the overall integration and on the way we put them all together. We also describe the basic principles of the tools that are used for copper metallization. There are two families of tools described here: tools for deposition and tools for chemical mechanical polishing (CMP). Plating tools, both for electroplating and for electroless plating, are described in detail, emphasizing their relation to the damascene process as applied to ULSI applications, i.e., material properties and integration in the manufacturing line.
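
The void-free condition behind superfilling can be captured in a toy model: bottom-up growth must finish before lateral growth from the two shoulders pinches the opening shut. The geometry and rates below are illustrative only, not a calibrated plating model.

```python
# Toy model of "superfilling": bottom-up deposition must outrun growth on
# the feature shoulders, or the trench pinches off and traps a void.
# All rates and dimensions are illustrative assumptions.

def fills_without_void(depth_nm: float, width_nm: float,
                       bottom_rate: float, shoulder_rate: float) -> bool:
    """True if bottom-up fill completes before the shoulders close the gap.

    Rates are deposition speeds in nm per unit time; the opening closes
    when growth from each shoulder spans half the trench width.
    """
    time_to_fill = depth_nm / bottom_rate
    time_to_pinch = (width_nm / 2) / shoulder_rate
    return time_to_fill < time_to_pinch

# A 100 nm deep, 45 nm wide trench (aspect ratio > 2:1, as in the text):
print(fills_without_void(100, 45, bottom_rate=10, shoulder_rate=1))  # True
# Without rate acceleration at the bottom, the same trench voids:
print(fills_without_void(100, 45, bottom_rate=1, shoulder_rate=1))   # False
```

This is exactly the role of the plating additives mentioned above: they bias deposition toward the feature bottom so the first condition holds.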
Another family of tools is the one used for metrology and inspection. We present the innovative and advanced tools that are being used for Cu nanotechnology. One of the most promising is X-ray technology, especially X-ray reflectometry (XRR), which has proven to be the only method suitable for ultra-thin barrier layers and for the porous materials that are used as low dielectric constant insulators. Another interesting development in modern planarization technology is the capability for in-line metrology. We present recent innovations in this field using optical metrology integrated with chemical mechanical polishing processes. Finally, we present a full and comprehensive review of the most promising interconnect technologies for future nanotechnology. This part includes a complete review of novel nanotechnologies such as bio-templating and nano-bio interfacing. Another key issue is the role of interconnects in future computation and storage technology; here we review interconnects in relation to 3D hyper-integration, spintronics, and moletronics. In summary, this part and the following prologue set forth the reasons why electroplating is considered the key technology for nano-circuit interconnects.

Tuesday, 18 February 2014

SolarCoin cryptocurrency pays you to go green

A new cryptocurrency with a solar-powered twist could be just the incentive we need to make the shift to clean energy. While most cryptocurrencies are just themed copies of Bitcoin – Dogecoin, based on a famous internet meme, is a notable example – SolarCoins are a bit harder to earn.
SolarCoin is based on Bitcoin technology, but in addition to the usual way of generating coins through mining – crunching numbers to try to solve a cryptographic puzzle – people can earn them as a reward for generating solar energy.
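
The "cryptographic puzzle" in question is proof-of-work: finding a nonce whose hash of the block data meets a difficulty target. A toy version (hugely simplified, with a far lower difficulty than any real coin) looks like this:

```python
import hashlib

# Toy proof-of-work: find a nonce such that SHA-256(data + nonce) starts
# with `difficulty` zero hex digits. Real Bitcoin-family mining uses the
# same idea at a vastly higher difficulty; this is just a sketch.

def mine(data: str, difficulty: int) -> int:
    """Return the first nonce whose hash meets the difficulty target."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("solarcoin block", difficulty=4)
print(f"Found nonce {nonce}")
# Each extra required zero multiplies the expected work by 16, which is
# why mining at real network difficulty consumes so much energy.
```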

People with solar panels on their house receive solar renewable energy certificates from their energy company for each megawatt-hour of electricity they feed back into the grid. These certificates are already traded for cash, but present one to SolarCoin's organisers and you'll receive one coin – they expect to start distribution in a matter of weeks.

True, the coins are worthless at the moment, but if people start using the currency to support solar energy, it should acquire value. SolarCoin Foundation spokesman Nick Gogerty says the initiative is aiming for $20 to $30 per SolarCoin, effectively providing solar panel owners with a crowdfunded feed-in tariff and encouraging more people to take part.
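
Putting the scheme's numbers together: one coin per megawatt-hour, valued at the foundation's $20-to-$30 target. The 5 MWh/year output for a residential rooftop system is an assumed figure for illustration.

```python
# Earnings sketch for the scheme described above: one SolarCoin per MWh
# fed into the grid, at the foundation's target price range. The annual
# output figure is an assumption, not from the article.

COINS_PER_MWH = 1.0
annual_mwh = 5.0  # assumed yearly export of a home rooftop system

coins = annual_mwh * COINS_PER_MWH
for price_usd in (20, 30):
    print(f"At ${price_usd}/coin: ${coins * price_usd:.0f} per year")
# At $20/coin: $100 per year
# At $30/coin: $150 per year
```

Modest sums per household, which is consistent with the article's framing of SolarCoin as a crowdfunded feed-in tariff rather than a payout.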

SolarCoin chose solar rather than another renewable technology because investment in solar panels is easier than in wind turbines, for example. "Solar is interesting because it can be very grassroots," says Gogerty. He and a colleague first conceived of an energy-backed asset in 2011 but couldn't make the idea work without a central bank. Bitcoin makes the bank unnecessary. "We're very thankful for Bitcoin leading the change."

Bitcoin has been accused of wasting energy in the past because of the computing power it takes to mine coins, but Gogerty says that SolarCoin is 50 times more energy-efficient because its algorithm allows the total number of coins to be mined faster – and that's before factoring in the energy boost from new solar panels.

If SolarCoin succeeds, the model could even be applied to other environmental projects, such as conserving the rainforest or endangered species. "If someone can come up with the mechanism and the approach, it would be a great thing," says Gogerty.
Jem Bendell of the University of Cumbria, UK, says SolarCoin is an interesting idea. "What it shows you is we can use this technology that provides a distributed, secure, public, global record for other things." But he cautions that many new coins are launching on the back of the Bitcoin gold rush, and not all will last.