Wednesday, February 25, 2015

Look directly at the 'Sunshine' with Oculus VR

Source: http://www.engadget.com/2015/02/25/sunshine-oculus-rift-demo/

Oculus Rift offers some pretty out-of-this-world experiences (punching sharks and becoming a bird immediately come to mind), but getting up close and personal with the sun isn't among them. Until now, of course. That, my friends, is what Sunshine Observation Deck is for. If it looks familiar, that's because it's based on a set from the 2007 Danny Boyle flick, Sunshine -- you know, the one that heavily influenced the original Dead Space and is easily one of the best sci-fi flicks of the past ten years. Anyhow, the walkthrough lets you explore the movie's solar observation deck and adjoining science lab, witnessing Sol in its fiery majesty. You can't adjust the intensity of the filter brightness, but on the bright side, you don't need to worry about catching a nasty sunburn, either.

Creator Julian Butler aspires for it to be "one of the more calm" experiences available for the Rift, which means you won't likely be running from a zombie hellbent on stopping your mission to drop a bomb inside the sun at the end. Or will you? Check out the video below for a hint.


Source: Oculus VR


Chrome's 1,000th web experiment visualizes all the others

Source: http://www.engadget.com/2015/02/24/chrome-experiment-1000/

Chrome experiment 1,000 in action

Google has offered a ton of Chrome Experiments to show what modern web technology can do, but it's doing something special for the 1,000th project -- namely, visualizing all the other projects. The effort lets you browse six years' worth of browser-based art, games and other creative works in multiple ways, including a tag-based timeline and a live code editor. To top things off, Google has redesigned the Experiments site so that it scales properly on everything from phones to desktops. You probably won't have time to explore every single web snippet, but it's worth a visit to number 1,000 if you're wondering what you've missed.


Source: Chrome Experiments, Google Chrome Blog


What you need to know about HTTP/2

Source: http://www.engadget.com/2015/02/24/what-you-need-to-know-about-http-2/

What you need to know about HTTP/2

Look at the address bar in your browser. See those letters at the front, "HTTP"? That stands for Hypertext Transfer Protocol, the mechanism a browser uses to request information from a server and display webpages on your screen. A new version of the reliable and ubiquitous HTTP protocol was recently published as a draft by the organization in charge of creating standards for the internet, the Internet Engineering Task Force (IETF). This means that the old version, HTTP/1.1, in use since 1999, will eventually be replaced by a new one, dubbed HTTP/2. This update improves the way browsers and servers communicate, allowing for faster transfer of information while reducing the amount of raw horsepower needed.
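The request/response exchange described above is plain text under HTTP/1.1, which makes it easy to sketch. The request and response below are hand-written illustrations, not captured traffic, and `example.com` is just a placeholder host:

```python
# A minimal sketch of the HTTP/1.1 request/response cycle: the browser
# sends a text request, the server answers with a status line, headers
# and the page content.

def build_request(host: str, path: str = "/") -> str:
    """Assemble the plain-text request a browser sends for a page."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"  # a blank line ends the header section
    )

def parse_status(response: str) -> int:
    """Pull the numeric status code out of the response's first line."""
    status_line = response.split("\r\n", 1)[0]  # e.g. "HTTP/1.1 200 OK"
    return int(status_line.split(" ")[1])

request = build_request("example.com")
canned_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>...</html>"
)
```

Everything here travels as human-readable text, which is exactly the overhead HTTP/2's binary framing is designed to avoid.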

Why is this important?

HTTP/1.1 has been in use since 1999, and while it's performed admirably over the years, it's starting to show its age. Websites nowadays include many different components besides your standard HTML, like design elements (CSS), client-side scripting (JavaScript), images, video and Flash animations. To transfer all that information, the browser has to create several connections, and each one carries its own details about the source, destination and contents of the data being exchanged. That puts a huge load on both the server delivering the content and your browser.
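To make the "many components" point concrete, here is a rough illustration of why one page means many fetches under HTTP/1.1: every stylesheet, script and image the parser finds below would need its own request. The snippet of HTML is invented for the example:

```python
# Count the external resources referenced by a page; under HTTP/1.1,
# each one is an extra request (and often an extra connection).
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "script" and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and "href" in attrs:
            self.resources.append(attrs["href"])

page = """<html><head>
<link rel="stylesheet" href="style.css">
<script src="app.js"></script>
</head><body><img src="logo.png"></body></html>"""

counter = ResourceCounter()
counter.feed(page)
# counter.resources now lists every extra fetch the page triggers.
```

A real page can reference dozens of such resources, which is where the per-connection overhead starts to add up.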

All those connections and the processing power they require can lead to slowdowns as more and more elements are added to a site. And if we know nothing else, it's that people can be quite impatient. We've come to expect blazing-fast internet and even the slightest of delays can lead to hair pulling and mumbled swears. For companies, a slow website can translate directly into lost money, especially for online services where long load times mean a bad user experience.

People have been searching for ways to speed up the internet since the days when dial-up and AIM were ubiquitous. One of the more common techniques is caching, where certain information is stored locally as opposed to transferring everything anew each time it's requested. But others have resorted to tricks like lowering the resolution of images and videos; still others have spent countless hours tweaking and optimizing code to cut just milliseconds from their load times. These options are useful, but they're really just Band-Aids. So Google decided to dramatically overhaul HTTP/1.1 and create SPDY, and the results have been impressive. In general, communication between a server and a browser using SPDY is much faster, even when encryption is applied. At a minimum, the transfer speed with SPDY can improve by about 10 percent and, in some cases, can reach numbers closer to 40 percent. SPDY proved so successful that in 2012 work began on a new standard protocol based on the technology -- the effort that led to the current HTTP/2 draft.
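The caching idea mentioned above can be sketched in a few lines: keep a local copy keyed by URL and only go to the server on a miss. The `fetch_from_server` function here is a hypothetical stand-in for a real network fetch:

```python
# A toy cache: the second request for the same URL is served locally,
# so nothing has to be transferred again.
cache = {}
fetch_count = 0  # how many times we actually hit the "server"

def fetch_from_server(url: str) -> str:
    """Stand-in for a real network fetch; counts how often it runs."""
    global fetch_count
    fetch_count += 1
    return f"<content of {url}>"

def get(url: str) -> str:
    if url not in cache:                 # miss: go to the server once
        cache[url] = fetch_from_server(url)
    return cache[url]                    # hit: served from the local copy

get("http://example.com/logo.png")
get("http://example.com/logo.png")       # second call never touches the server
```

Real browser caches add expiry and validation rules on top of this, but the core trade is the same: local storage in exchange for fewer transfers.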

What is a protocol?

You can think of a protocol as a collection of rules that govern how information is transferred from one computer to another. Each protocol is a little different, but usually they include a header, payload and footer. The header contains the source and destination addresses and some information about the payload (type of data, size of data, etc.). The payload contains the actual information, and the footer holds some form of error detection. Some protocols also support a feature called "encapsulation," which lets them include other protocols inside of their payload section.
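The header/payload/footer shape described above can be made concrete with a toy wire format. This is an invented protocol for illustration, not HTTP: a small header carrying a message type and the payload size, the payload itself, and a checksum footer for error detection:

```python
# A toy protocol: header (type + payload length), payload, and a CRC32
# footer so the receiver can detect a damaged message.
import struct
import zlib

def pack_message(msg_type: int, payload: bytes) -> bytes:
    header = struct.pack("!BI", msg_type, len(payload))  # 1-byte type, 4-byte size
    footer = struct.pack("!I", zlib.crc32(payload))      # error detection
    return header + payload + footer

def unpack_message(data: bytes):
    msg_type, size = struct.unpack("!BI", data[:5])
    payload = data[5:5 + size]
    (crc,) = struct.unpack("!I", data[5 + size:])
    if zlib.crc32(payload) != crc:                        # the "ripped envelope"
        raise ValueError("corrupted payload")
    return msg_type, payload

packed = pack_message(1, b"hello")
```

Encapsulation, in these terms, would simply mean that the `payload` bytes are themselves a complete message in some other protocol.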

You can think of it like sending a letter using snail mail. Our protocol in this case would be defined by the USPS. The letter would require a destination address in a specific format, a return address and postage. The "payload" would be the letter itself and the error detection is the seal on the envelope. If it arrives ripped and without a letter, you'd know there was a problem.

Why is HTTP/2 better?

In a few words: HTTP/2 loads webpages much faster, saving everyone time that otherwise would go to waste. It's as simple as that.

The example below, published by the folks over at HttpWatch, shows transfer speeds increasing more than 20 percent, and this is just one test with web servers not yet fully optimized (the technology will need some time to mature for that). In fact, improvements of around 30 percent seem to be common.

Example of HTTP page load speed (above) against HTTP/2 (below)

HTTP/2 improves speed mainly by creating one constant connection between the browser and the server, as opposed to a connection every time a piece of information is needed. This significantly reduces the amount of data being transferred. Plus, it transfers data in binary, a computer's native language, rather than in text. This means your computer doesn't have to waste time translating information into a format it understands. Other features of HTTP/2 include "multiplexing" (sending and receiving multiple messages at the same time), the use of prioritization (more important data is transferred first), compression (squeezing information into smaller chunks) and "server push," where a server makes an educated guess about what your next request will be and sends that data ahead of time.
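The binary framing and multiplexing mentioned above come together in HTTP/2's 9-byte frame header, per the draft's wire format: a 24-bit payload length, an 8-bit frame type, 8 bits of flags and a 31-bit stream identifier. The stream ID is what lets frames from many simultaneous requests be interleaved on the single connection. A minimal sketch of packing and unpacking that header:

```python
# Pack and unpack the 9-byte HTTP/2 frame header: 24-bit length,
# 8-bit type, 8-bit flags, and a 31-bit stream identifier.
import struct

def pack_frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    high, low = length >> 16, length & 0xFFFF            # split the 24-bit length
    return struct.pack("!BHBBI", high, low, frame_type, flags, stream_id & 0x7FFFFFFF)

def unpack_frame_header(header: bytes):
    high, low, frame_type, flags, stream_id = struct.unpack("!BHBBI", header)
    return (high << 16) | low, frame_type, flags, stream_id & 0x7FFFFFFF

# A 5-byte DATA frame (type 0) with the END_STREAM flag (0x1) on stream 3.
header = pack_frame_header(5, 0x0, 0x1, 3)
```

Because each frame names its stream, the browser can have dozens of requests in flight and sort the answers back out as frames arrive in any order -- that's multiplexing in a nutshell.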

So when will we get to enjoy the benefits of HTTP/2?

There's no real start date for the use of HTTP/2, and many people may already be using it unknowingly. The draft submitted on February 11th will expire in six months (August 15th, to be precise). Before then, it either has to be confirmed and become a finished document, called an "RFC," or a new draft with changes has to be published.

As a side note, we should mention that the term "RFC" comes from "Request for Comments," but it's really the name for a finalized document used by the IETF. Also, an RFC is not a requirement, but more of a suggestion of how things should be designed. (Confusing, right?) However, for a protocol to work properly, everyone has to follow the same rules.

The HTTP/2 technology is already baked into many web servers and browsers, even if it's still just a draft. For example, Microsoft supports HTTP/2 on Internet Explorer under the Windows 10 Technical Preview; Chrome also supports it (while it's disabled by default, you can easily enable it); and Mozilla has had it available since Firefox Beta 36.

As for web servers, IIS (the Windows web server) already supports HTTP/2 under Windows 10, and Apache and Nginx are expected to offer support very soon (SPDY is already supported through extensions). This means that sooner rather than later, we will all be using HTTP/2. And chances are you won't even realize it when the switch is made, unless you're in the habit of timing load times for your favorite sites. Plus, you'll still just see "http" or "https" in the address bar, so life will continue as usual, but a bit faster.

[Image credits: Shutterstock (Server rack); HttpWatch (Benchmark charts)]


Source: IETF


AMD's next laptop processor is mostly about battery life

Source: http://www.engadget.com/2015/02/25/amd-carrizo-processor/

AMD Carrizo

Intel isn't the only chip giant championing battery life over performance this year. AMD has revealed Carrizo, a processor range that's focused heavily on extending the running time of performance-oriented laptops. While there will be double-digit boosts to speed, there's no doubt that efficiency is the bigger deal here. The new core architecture (Excavator) is just 5 percent faster than its Kaveri ancestor, but it chews up 40 percent less energy at the same clock rate -- even the graphics cores use 20 percent less juice.

Not that this is the only real trick up AMD's sleeve. Carrizo is the first processor to meet the completed Heterogeneous System Architecture spec, which lets both the CPU and its integrated graphics share memory. That lets some tasks finish faster than they would otherwise (since you don't need as many instructions), and it could provide a swift kick to both performance and battery life in the right conditions. You'll also find dedicated H.265 video decoding, so this should be a good match for all the low-bandwidth 4K videos you'll stream in the future.

The new chip is pretty promising as a result. With that said, its creator will undoubtedly be racing against time. Carrizo is expected to reach shipping PCs in the second quarter of the year, or close to Intel's mid-year target for its quad-core Broadwell processors. You may find shiny new AMD and Intel chips in PCs at around the same time -- that's good news if you're a speed junkie, but it's not much help to AMD's bottom line.


Via: PCWorld

Source: AMD


Tuesday, February 24, 2015

I would totally live in the world's first carbon-positive house

Source: http://sploid.gizmodo.com/i-would-totally-live-in-the-worlds-first-carbon-positiv-1687547152

According to Dwell, this is the "world's first carbon-positive prefabricated house," which produces "more energy than it uses." Its designer, ArchiBlox, claims it's "expected to offer the same environmental benefits as 6,095 native Australian trees." Simple, clean design -- put it in a forest and I will move in.
