Developer Brandon Fiquett has hacked Siri on the iPhone 4S to enable control of his car
What if you could ask your iPhone to start your car?
Developer Brandon Fiquett has gotten his phone to do just that -
hacking Siri on the iPhone 4S to work along with his Viper SmartStart
module to not only start his car, but also to arm and disarm the car's
alarm, lock and unlock doors, and even pop the vehicle's trunk.
Fiquett was able to perform the hack using Siri Proxy, created by fellow hacker @plamoni, together with a custom PHP script and custom plugins running on his home computer to communicate with Siri.
Originally the system was only able to respond to commands such as
"Vehicle Arm", "Vehicle Disarm", "Vehicle Start", "Vehicle Stop",
"Vehicle Pop Trunk", and "Vehicle Panic," with an update later adding
more conversational commands such as "Start my car", "Lock my car", and
"Pop my trunk."
The hack is currently in the "proof of concept stage," and isn't
something you're likely to be able to replicate unless you really know
your way around some code. If you are super tech-savvy, however, Fiquett
has published links to the code on his website that you can use to recreate the hack with your own phone and vehicle.
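In outline, though, the command handling is conceptually simple. Here is a rough, hypothetical sketch of the phrase-matching idea - this is not Fiquett's published code, which pairs SiriProxy plugins with his PHP script and the Viper SmartStart service:

    # Hypothetical sketch of the phrase-to-action matching idea.
    # The actual hack uses SiriProxy plugins plus a custom PHP script
    # talking to the Viper SmartStart module; none of that is shown here.
    COMMANDS = {
        "vehicle arm": "arm", "vehicle disarm": "disarm",
        "vehicle start": "remote_start", "vehicle stop": "stop",
        "vehicle pop trunk": "pop_trunk", "vehicle panic": "panic",
        # the later, more conversational forms:
        "start my car": "remote_start", "lock my car": "lock",
        "pop my trunk": "pop_trunk",
    }

    def handle_transcription(phrase):
        """Map a transcribed phrase to a vehicle action, or return None
        so the phrase is passed through to Siri untouched."""
        return COMMANDS.get(phrase.lower().strip())

    print(handle_transcription("Start my car"))         # -> remote_start
    print(handle_transcription("What's the weather?"))  # -> None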
This particular hack is the work of just one man, and with Siri having
been available for only a little over a month, it will be interesting to see
what other functionality the developer community is able to create over
time, as well as what other functionality Apple may add to Siri in the
future.
The re-purposed ProMetal 3D printer used by the WSU researchers to create objects in a bone-like material
Over the past decade, 3D printing technology has made the
transition from huge expensive units used by industry to produce
prototype components to small desktop units like the DIY MakerBot Thing-O-Matic
that are within the reach of home users. But home users looking to
produce custom household objects aren't the only ones set to benefit
from advances in 3D printing technology, with 3D bio-printers offering
the prospect of creating organs on demand for replacement surgery. Now
researchers have used a 3D printer to create a bone-like material that
could be used to create customized scaffolds to stimulate the growth of
replacement bone tissue.
For their work, researchers at Washington State University (WSU)
optimized a commercially available ProMetal 3D printer designed to make
metal objects. The re-purposed printer uses an inkjet to spray a plastic
binder over a bed of powder in layers just 20 microns thick - that's
about half the width of a human hair.
When paired with actual bone and used with some bone growth factors,
the resulting bone-like material acts as a scaffold for new bone to grow
on before dissolving with no apparent ill effects. The researchers say
this would allow customized scaffolds to be produced for new bone to
grow on in orthopedic procedures and dental work, as well as to deliver
medicine for treating osteoporosis.
"If a doctor has a CT scan of a defect, we can convert it to a CAD
file and make the scaffold according to the defect," said Susmita Bose, a
professor in WSU's School of Mechanical and Materials Engineering and
co-author of the WSU study.
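That scan-to-scaffold step is easy to picture in outline. The sketch below is only an illustration of the idea, not the WSU pipeline: it assumes the CT data arrives as a 3D array of density values, and the threshold is a made-up placeholder rather than a clinically validated number.

    import numpy as np

    def defect_mask(ct_volume, bone_threshold=300.0):
        """Flag voxels below a bone-density threshold as the defect
        region a scaffold would need to fill (threshold is illustrative)."""
        return ct_volume < bone_threshold

    def slice_into_layers(mask, voxel_height_um, layer_um=20):
        """Resample the defect mask into 20-micron print layers,
        matching the layer thickness quoted for the ProMetal printer."""
        repeats = max(1, round(voxel_height_um / layer_um))
        return np.repeat(mask, repeats, axis=0)

    # Toy 'scan': an 8x8x8 block of dense bone with a low-density pocket
    scan = np.full((8, 8, 8), 1000.0)
    scan[3:5, 3:5, 3:5] = 50.0
    layers = slice_into_layers(defect_mask(scan), voxel_height_um=100)
    print(layers.shape)  # (40, 8, 8): each 100-um voxel slab -> five layers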
The researchers say they have already seen promising results in in vivo
tests on rats and rabbits, and that after just a week in a medium with
immature human bone cells, the scaffold was supporting a network of new
bone cells. The main finding of their research was that the addition of
silicon and zinc more than doubled the strength of the main material,
calcium phosphate.
Research into the use of three-dimensional scaffolds to stimulate the
growth of bone and/or tissue within the body isn't new: researchers at
MIT have used a similar method, implanting scaffolds into knees and other joints to stimulate bone and cartilage growth, while Columbia University researchers have been looking at growing dental implants
in a patient's empty tooth socket using a tooth-shaped scaffold. But
the use of 3D printing technology would make it easy to create scaffolds
specifically tailored for individual patients.
The WSU team's study has been published in the journal Dental Materials.
Professor Bose explains the technology in the following video:
Researchers from North Carolina State University have developed a
new technique for transforming two-dimensional print output into 3-D
structures, using nothing but light. A pre-stressed polymer sheet is
fed into a conventional inkjet printer, which applies black stripes to
areas designed to be used as hinges. The desired pattern is then cut out
and subjected to infrared light. The material contracts at the hinges,
and the sheet collapses into a predefined 3D structure. Dr. Michael
Dickey, who co-authored a paper describing the research, says the
process could be used for packaging purposes and could be applied to
high-volume manufacturing.
The idea is based on a remarkably simple principle. The printed-on
black stripes absorb more energy than the remaining part of the polymer
sheet, so the contraction manifests itself exactly where it's needed,
which is at the hinges. What is more, the technique uses existing
materials and is compatible with widely used commercial printing
methods, such as roll-to-roll printing or screen printing.
Three-dimensional objects, such as cubes or pyramids, can now be printed
using techniques that are inherently two-dimensional.
It is possible to vary the extent to which a hinge folds by changing
the width of the black stripe. The wider the hinge, the further it
folds, e.g. 90 degrees for a cube and 120 degrees for a pyramid. Also,
wider hinges fold quicker, as the energy-absorbing area is larger. By
patterning the lines on either side of the material, the researchers can
decide which direction the hinges should fold, so the resulting
structures can be pretty complex.
A computer model of the process shows that the surface temperature of
the hinge must be higher than the glass transition temperature of the
material used (i.e. the temperature at which the material changes its
behavior and becomes flexible). Another important finding is that the
heating needs to be targeted at the hinges, and not at the whole sheet,
for the folding to take place.
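Those two findings - fold angle scaling with hinge width, and folding only happening once the hinge passes the glass transition temperature - are simple enough to capture in a toy model. The linear width-to-angle calibration and the Tg value below are assumptions for illustration only; the paper's actual numbers aren't reproduced here.

    # Toy model of the self-folding hinge. The linear calibration and
    # the Tg value are illustrative assumptions, not measured values.
    GLASS_TRANSITION_C = 100.0   # placeholder Tg for the polymer sheet
    DEG_PER_MM = 45.0            # assumed: degrees of fold per mm of ink

    def fold_angle(hinge_width_mm, hinge_temp_c):
        """Fold angle in degrees; zero if the hinge never softens.
        Per the paper's finding, it is the hinge surface temperature
        (not the whole sheet) that must exceed Tg."""
        if hinge_temp_c <= GLASS_TRANSITION_C:
            return 0.0
        return DEG_PER_MM * hinge_width_mm

    print(fold_angle(2.0, 130.0))   # -> 90.0, the fold a cube needs
    print(fold_angle(2.67, 130.0))  # -> ~120, the fold a pyramid needs
    print(fold_angle(2.0, 80.0))    # -> 0.0, hinge below Tg, no fold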
The work is described in a paper recently published in the journal Soft Matter. See the video below for a presentation of the process.
A plastic material inspired by the leaves of the aquatic weed
Salvinia molesta may lead to a coating that makes ships more buoyant and
hydrodynamic (Photo: Eric Guinther)
It may be an invasive weed that's fouling waterways in the U.S., Australia and other countries, but it turns out that Salvinia molesta
has at least one good point - it's inspired a man-made coating that
could help ships stay afloat. The upper surfaces of the floating plant's
leaves are coated with tiny water-repellent hairs, each of which is
topped with a bizarre eggbeater-like structure. These hairs trap a layer
of air against the leaf, reducing friction and providing buoyancy,
while the eggbeaters grab slightly at the surrounding water, providing
stability. Scientists at Ohio State University have successfully
replicated these hairs in plastic, creating a buoyant coating that is
described as being like "a microscopic shag carpet."
In laboratory tests, the man-made coating performed just like the Salvinia
hairs. In both cases, water droplets couldn't penetrate between the
hairs, but did cling to the uniquely-shaped tips - they even hung on
when the surface was tilted by 90 degrees. The adhesive force of the
coating was measured at 201 nanonewtons (billionths of a newton), while
the natural hairs managed an almost identical 207 nanonewtons. While
these numbers are far below those attained by substances such as
adhesive tape, they are similar to those of gecko feet - and geckos seem
to have no problem climbing walls.
"I've studied the gecko feet, which are sticky, and the lotus leaf, which is slippery," said lead researcher Bharat Bhushan. "Salvinia combines aspects of both."
If commercialized, the Ohio State-developed
material could conceivably be applied to the hulls of ships or
submarines. It is believed that it could provide the vessels with more
flotation, while helping them sit in the water with more stability and
move through it more easily.
A paper on the research was recently published in the Journal of Colloid and Interface Science.
Luis Cruz's Eyeboard - an eye tracking computer interface for the disabled
This unique and worthwhile project was put together by a
17-year-old electronics and programming whiz from Honduras, of all
places. The Eyeboard system is a low-tech eyeball-tracking device that
allows users with motor disabilities to enter text into a computer using
eye gestures instead of a physical interface. This kind of system is
not unique - there are plenty of eye-tracking interfaces
out there - but Luis Cruz has figured out a way to build the full
system into a set of glasses for less than US$300, putting easier
communication within reach of users in developing countries. He's also
releasing the software as open source to speed up development.
Personally, I spent my year as a 17-year-old in a series of heroic
failures trying to impress girls with my air guitar.
Tracking eyeball movements is far from a new science - in fact,
people have been studying eye movements for more than 130 years. Early
on, the main focus was on understanding how the process of reading works
- the way the eyes skip and dart across rows of text to take in written
information. Congratulations, you're now aware that your eyes are
jumping from word to phrase to word as you read this article!
While eyeball tracking used to be achieved using painstaking manual
mapping of direct observations, more recent technologies have made it
much easier and more precise. High-tech contact lenses, for example, can
now be used to map and record eye movement to provide data that's used
in everything from driver training to sports development to gaming,
virtual reality and medical research. Still, the dominant commercial
application by far is in advertising and usability - working out how
different designs steer the eye towards a final goal most effectively.
But for people with certain motor disabilities, particularly those
who don't have good control over their hands or voices, eye tracking can
take on a much more important role, as a hands-free computer interface
that can be a fantastic aid to communication, and a much easier
alternative to the head wand or mouth stick, which are used to tap on a
keyboard.
Unfortunately, eyeball tracking computer interfaces have proven to be
quite expensive on the market - anywhere from several thousand to more
than US$10,000 when combined with software. This puts them out of reach
of many affected people, particularly in developing countries where that
sort of money could represent several years of average earnings.
And that's where 17-year-old Honduran high school student Luis Cruz
is stepping in. Two years ago, Cruz indulged his love of electronics and
software tinkering by building a video game system - but in the last 12
months he's turned his focus to far less teenage pursuits.
Cruz has spent the last year building and developing an eye tracking
computer interface that works on the principles of electrooculography -
that is, measuring the resting potential of the retina using electrodes
placed just beside the eyes.
As it turns out, the human eye is polarized - the front of the eye
carries a positive charge and the rear of the eye has a group of
negatively charged nerves attached to the retina. So when the eye is
moved, you can use electrodes to measure the change in the dipole
potential of the eye through the skin.
It's a fairly lo-fi input - it doesn't track eye movements with
anywhere near the accuracy of a high tech contact lens or video tracking
system - but on the other hand, it's extremely cheap, and so non-invasive
that Cruz has managed to mount the electrodes in a pair of sunglasses.
And it's good enough at tracking macro eye movements to allow the next
phase of the project: the computer interface software.
Although Cruz's sensor glasses can only track horizontal eye
movements at this stage, he's developed a piece of software that takes
those inputs and uses them to choose letters in a grid, so that users
can type entire words using just their eye motions.
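To get a feel for how horizontal-only input can still produce text, here is a hypothetical sketch of the grid-selection idea - not Cruz's actual open-source code. It assumes the electrodes deliver a stream of signed samples, with a large positive swing meaning a glance right, a large negative swing a glance left, and a steady near-zero run meaning the eyes are at rest.

    # Hypothetical sketch of typing with horizontal-only eye gestures.
    # Thresholds and the dwell-to-select rule are invented for
    # illustration; Cruz's actual Eyeboard software may differ.
    LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    SWING = 150e-6   # placeholder EOG threshold, in volts
    DWELL = 5        # consecutive resting samples needed to select

    def type_from_eog(samples):
        cursor, rest, text = 0, 0, []
        for v in samples:
            if v > SWING:             # glance right: advance the cursor
                cursor, rest = (cursor + 1) % len(LETTERS), 0
            elif v < -SWING:          # glance left: move the cursor back
                cursor, rest = (cursor - 1) % len(LETTERS), 0
            else:                     # eyes at rest: count toward a dwell
                rest += 1
                if rest == DWELL:     # held steady: select this letter
                    text.append(LETTERS[cursor])
                    rest = 0
        return "".join(text)

    # Two glances right followed by a steady dwell selects "C".
    print(type_from_eog([200e-6, 0, 200e-6] + [0] * 5))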
The Eyeboard system is still in a fairly embryonic state at this
stage, but Cruz believes the hardware can be produced cheaply - his
prototypes cost between US$200 and US$300 for a set of glasses -
and he's releasing the software as open source to enable quicker
development of tools like autocomplete which will make users'
communication even quicker and more fluid. Here's a little more about the effectiveness of electrooculography in computer interfaces.
Clearly, this is a kid with some serious drive - check out his technical documentation for a closer look at the project. If anyone feels like giving Cruz a helping hand, he's looking for PayPal donations
to help him towards a college education in America. Given how much he's
achieved in Honduras at such a young age, his potential and motivation
are clear, even if his home country doesn't afford a lot of
opportunities.
We wish Cruz the best of luck and hope to see the Eyeboard project
develop into something that can help the disabled community in Honduras
and around the world.
Russian Aircraft Corporation MiG
is best known for its fighter planes, which have been used by the USSR,
China, North Korea and North Vietnam since the early days of WWII.
These days, the former government-owned RAC MiG is a publicly traded
entity that competes on the open market with its technologies, and has
more than 1,600 of its MiG-29 fighters in operation across 25 countries.
Now MiG is claiming a major first in military aviation with the launch of a 3D flight simulator at the Dubai Air Show,
providing volumetric visualization of beyond-the-cockpit space for
trainee top guns. The simulator comes complete with the MiG-29's cockpit
and actual control systems.
Existing military simulators use a 2D projection system, at best
projected onto a wrap-around screen that imitates the surrounding
environment with more or less precision, but they do not provide
volumetric visualization.
As a result, pilots have trouble assessing the distance to the
key virtual objects being monitored, as well as the size of those
objects, making it difficult to perform precisely when flying the
virtual fighter in close proximity to other aircraft, during aerial
refueling, or on the approach to a landing strip or aircraft carrier.
Some advanced systems offer better visualization of actual distances
in the beyond-the-cockpit space but such simulators are huge in size,
and have a limited arc of visibility.
MiG claims it has developed a better visualization system with a much
higher level of accuracy from the virtual cockpit. The system produces
stereo imagery of the aerial environment using the same type of glasses
used in 3D cinema, creating a far more realistic illusion of real
flight. Experience so far suggests even inexperienced pilots can
easily assess distances to and dimensions of other aircraft and objects.
Guests at this week's Dubai Air Show can try the 3D MiG-29 simulator for themselves.
LuminAID is an extremely lightweight, easy-to-transport, solar-powered inflatable waterproof lantern
Although light can be considered a basic human need alongside food,
water and shelter, 1.6 billion people around the world have no access
to a stable and safe source of it. It's a situation that two bright
young Architecture graduates are aiming to combat with the LuminAID
solar-powered lantern. Like the Solar Pebble
initiative, the LuminAID lantern is designed to address dependence on
kerosene lamps in the developing world and its extremely lightweight and
easy to transport inflatable design is also targeted at use in disaster
relief situations ... plus it makes a very handy addition to your
camping kit.
At first glance the LuminAID resembles a simple plastic bag, but its
coating is made of flexible, semi-transparent waterproof material with a
printed dot matrix to diffuse light and it incorporates a very thin
solar panel, bright LEDs that provide the light source and a reinforced
handle for easy carrying.
LuminAID is fully charged after 4-6 hours in sunlight and is
reportedly good for up to four hours of lighting at 35 lumens, or up to
six hours at 20 lumens. The battery can be recharged 800 times.
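Taken at face value, those numbers pencil out to a useful amount of light. A quick back-of-envelope check, assuming the quoted figures hold up in practice:

    # Back-of-envelope arithmetic on LuminAID's quoted specs.
    specs = {35: 4, 20: 6}   # lumens -> hours per full charge
    for lumens, hours in specs.items():
        print(f"{lumens} lm for {hours} h = {lumens * hours} lumen-hours/charge")
    # 800 rated recharge cycles at the dimmer setting:
    print(f"up to {800 * 6} hours of light over the battery's life")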
Founded by Anna Stork and Andrea Sreshta, both graduates of the
Columbia Graduate School of Architecture, Planning & Preservation in
New York, LuminAID Lab is currently field testing the inflatable lamp
in Rajasthan, India, where fifty percent of households lack electricity.
It's being distributed to rural schools, homes and small-business owners.
LuminAID Lab says 100 percent of the funds raised will go toward producing
and distributing LuminAID lights to backers and to its community projects.
You can back the LuminAID project by purchasing a lamp on IndieGoGo.
A pledge of US$25 buys two units - you get a lamp of your own, and
someone in need receives one for free. For US$50 you get the Adventure
Pack, which adds a carabiner and a LuminAID t-shirt, or a lamp whose
coating carries a print pattern by graphic designer Hillary Cribben.
You can also back the project with US$10 without receiving a lamp, or
donate a larger sum (US$100 - 1,000) to fund multiple lamps. Production
of the lamps will take 45-60 days.
Sounds like a great deal to us ... and a fantastic project.
A True 3D pyramid hovers in mid-air (Image: DigInfo)
Engineers from Burton Inc. in Japan have rolled out a "True 3D"
display, which evolved from work begun five years earlier by teams at
Keio University and Japan's National Institute of Advanced Industrial
Science and Technology (AIST). While most 3D displays available today
involve a form of optical illusion that depends on the parallax or
disparity inherent in human binocular vision, this new system, which can
function in air or under water, needs no screen of any sort, and the
effect is quite impressive.
"Most current 3D devices project pictures onto a 2D screen, and make
the pictures appear 3D through an optical illusion. But this device
actually shows images in mid-air, so a feature of this system is that it
enables 3D objects to be viewed naturally," said Burton engineer Hayato
Watanabe.
The Burton system functions by focusing laser light into points which
stimulate oxygen and nitrogen molecules in the air to a plasma
excitation level. Computer control of the laser's position can cause the
dots to coalesce into recognizable patterns very similar to a hologram,
but literally in thin air.
"This system can create about 50,000 dots per second, and its frame
rate is currently about 10-15 fps. But we're working to improve the
frame rate to 24-30 fps," Watanabe explained.
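Those two figures set a hard budget on image complexity: dividing the dot rate by the frame rate gives the number of plasma points available for each frame, which shows why pushing toward cinema frame rates means simpler images or a faster laser. A quick check of the arithmetic:

    # Voxel budget per frame at the quoted 50,000 dots per second.
    DOTS_PER_SECOND = 50_000
    for fps in (10, 15, 24, 30):
        print(f"{fps:>2} fps -> {DOTS_PER_SECOND // fps} dots per frame")
    # 10 fps -> 5000 ... 30 fps -> 1666: frame rate trades off detail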
In the demonstration video following this article, a green laser
shines up from below into a small tank of water, but to create displays
in air, more powerful lasers are needed. By combining red, green and
blue lasers, the Burton team has managed to generate color images, which
opens up a vast array of possible uses as the technology improves.
"As the first application for this, we thought of digital signage.
Also, because this system makes 3D objects look natural, it could be
used for analyzing 3D objects, and if its precision can be improved, it
could be used in health care, too," said Watanabe.
Indeed, it seems we could be witnessing the birth of the technology
that might one day allow autonomous robots to beam important messages
such as, say, "Help me, Obi-Wan Kenobi, you're my only hope." We'll just
have to wait and see.
Source: DigInfo
The Robothespian presentation, telepresence and acting humanoid
robot by Engineered Arts Ltd. is designed for science education purposes
Engineered Arts Ltd.'s Robothespian is probably one of the first
professional robotic actors to make it into the real world (sorry,
T-1000). Its elegant movements, extraordinary body language and
emotion-conveying skills make it a great communicator. It may not be
capable of helping the elderly, it's not nearly as agile and athletic as Boston Dynamics' PETMAN, and it's unlikely to be of any use during eye surgery.
But that's OK. Robothespian is an artist. A robot burdened with the
task of exploring the ephemeral territory of the arts and claiming it
for its robotic brethren. And it seems it is extremely well equipped to
get the job done.
Thanks to LCD eyes that convey emotions and feelings to match what is
being said, along with emotive LED lighting in its body shell,
Robothespian has become proficient at the art of mesmerizing its
audience. If you need a captivating story-teller, just hire a
professional voice-over artist once and then leave Robothespian to
deliver the same powerful act over and over again.
The robot seems ideal for science education purposes, partly because
it's engaging and very cost-effective (no cigarette or lunch breaks),
and partly because in a decade or two, the robotics industry is likely
to become as significant as the automotive industry. Robothespian's task
of entertaining and educating today's schoolchildren in science museums
may one day turn out to have played a very important role.
It also has all the makings of a good actor. It can read text and add
expression to your script, plus it can sing, dance and perform on stage
without ever succumbing to stage fright. Thanks to motion capture
software, it is able to mimic the movements of someone in the audience.
It can also recognize faces and track people in a crowd.
Robothespian is a web-connected device and it can be controlled via
an online interface. You can see what the robot sees, and you can tell
it what to do or say from virtually anywhere in the world, which makes
it a powerful telepresence tool. The robot can also handle interaction
with humans independently, as it's capable of looking up answers to
queries on the Web. It can also control theater lighting and
multichannel sound. Most importantly, however, it can work day and night
without a break and without compromising the quality of the
performance.
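Engineered Arts doesn't publish the interface details here, so the endpoint and payload below are purely hypothetical - just a sketch of what driving a web-connected robot like this could look like from anywhere in the world.

    # Purely hypothetical sketch of remote control over a web API; the
    # host name, endpoint and payload fields are invented for illustration.
    import json
    from urllib import request

    def say(host, text, expression="neutral"):
        """Ask the robot to speak a line with a matching eye expression."""
        payload = json.dumps({"say": text, "eyes": expression}).encode()
        req = request.Request(
            f"http://{host}/api/perform",    # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return resp.status

    # say("robothespian.local", "Welcome to the museum!", "happy")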
Robothespian comes in three different flavors, with the entry-level
Model 3 costing GBP55,000 (US$87,208). Even at that price it could be
considered cost effective, especially if you take into account that the
robot adds a level of audience engagement that is beyond that of most
presenters. So, what's in the box?
The humanoid is 1m 75cm (5'9") tall, weighs in at 33 kg (73 lbs),
sports an aluminum chassis and a body shell made of polyethylene
terephthalate. It offers over 30 axes of movement, with some movements
handled by air muscles and others by servo motors. There is an integrated
computer on board with a 1.6 GHz Atom processor and a 32GB SSD to store
motion information and control software. A head-mounted camera enables
remote image streaming for telepresence purposes (again, face tracking
capabilities come in handy). There is also an integrated 20W audio amp
for verbal delivery. The robot comes with an interface PC and a 19-inch
touchscreen housed in a standing console. To operate, it requires an
electrical supply, a compressed air supply and an Internet connection
used for remote maintenance and diagnostics.
Robothespians have been around for a while now. The project started
in January 2005 and the goal was to supply a troupe of robotic actors to
perform at the "Mechanical Theatre" at the Eden Project in Cornwall.
Model 2 came out nearly three years ago and was the first to gain wider
attention with seven robots sold. Currently more than 20 Model 3s work
in museums and science education projects around the world. They are
used in presentations, as greeters, guides, or even as robotic
spokespeople (since you can automate a performance in virtually any
language, the Robothespian is truly a citizen of the world).
Model 4 is planned to be released about two years from now, but the
final capabilities of the new model have not yet been decided. The
creators considered moving to a fully bipedal robot (as opposed to one
attached to a round stabilizing base), but this will only be done if
they are able to overcome safety issues related to the possibility of
the robot falling over in a public environment.
And what about the robot's main vocation - acting? Well, you can buy a
package consisting of three robotic thespians, complete with a stage, a
lighting and projector rig, speakers, and an integrated control system.
Engineered Arts takes care of setting the whole thing up and provides
the necessary training and online support. All that is left to do is to
create actions and
save them to a timeline using the open source 3D animation software
called Blender. The animation is then used to program the robotic
actors' performance.
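Blender's built-in Python API (bpy) makes that timeline straightforward to read back out. The sketch below - run inside Blender, with "RobotPerformance" as a placeholder action name - dumps an action's keyframes as plain (frame, channel, value) tuples; mapping those channels onto Robothespian's 30-odd axes is the part Engineered Arts' own tooling handles, and isn't shown.

    # Run inside Blender: dump an action's keyframes via the bpy API.
    # "RobotPerformance" is a placeholder name for the animation action.
    import bpy

    def export_keyframes(action_name):
        action = bpy.data.actions[action_name]
        rows = []
        for fcurve in action.fcurves:
            # data_path and array_index identify the animated channel,
            # e.g. one rotation component of an armature bone
            channel = (fcurve.data_path, fcurve.array_index)
            for kp in fcurve.keyframe_points:
                frame, value = kp.co   # co holds (frame, value)
                rows.append((frame, channel, value))
        return sorted(rows)

    # for row in export_keyframes("RobotPerformance"):
    #     print(row)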
One such robotic theater is located at the Copernicus Science Centre
in Warsaw, Poland, where you can watch a performance called "Prince
Ferrix and Princess Crystal" based on a short story by Polish sci-fi
author Stanislaw Lem. Check out the Copernicus Science Centre's website for more details and see the video below for a sample of Robothespian's exquisite acting skills.