Sexy Lady For You


Sexiest New Nepali Song 2014 | Bhattiko Sharab by Gurung Solti


The video above is for entertainment purposes only. We do not claim any copyright over it. If you hold the copyright to this video, please contact us at shahsantu2014@gmail.com and we will remove it immediately.


Link to Watch Song

World's third largest smartphone maker Xiaomi made just $56 million in profit last year


        Xiaomi Technology, often referred to as the ‘Apple of China’, made just 347.48 million yuan ($56.15 million) of profit
    on 26.58 billion yuan ($4.30 billion) in revenue last year, according to Reuters. The information was obtained from the financial results
    that the company disclosed after it purchased a 1.3 percent stake in home appliance maker Midea for 1.27 billion yuan ($205 million).
        The numbers are particularly interesting as they are in contrast with a recent Wall Street Journal report which claimed that Xiaomi’s net
    profit nearly doubled last year, rising 84 percent to 3.46 billion yuan ($566 million) from 1.88 billion yuan in 2012. The paper called it “a lucrative
    business in an industry where most players selling cheap handsets struggle to break even.”
        The numbers also underline the fact that despite being the number one smartphone manufacturer in China, the four-year-old startup, which
    recently became the world’s third largest smartphone company, still lags far behind the likes of Samsung and Apple in terms of revenue and profit.
    Just to give you an idea, Apple made around $25.4 billion in revenue in Greater China during the same period, nearly six times Xiaomi’s total.
        Xiaomi is famous for selling smartphones with killer specs at ridiculously low prices, an apparent attempt by the company to increase its
    market share at the expense of profit. Its cheapest smartphone, the Redmi 1S, starts at 699 yuan ($114), and its latest flagship model, the Mi4,
    retails at 1,999 yuan ($327). The company also manages to keep its marketing costs low by counting on fans for social-media PR.
    The Reuters report also reveals that 77.6 percent of the company is owned by chairman and CEO Lei Jun, while the rest is split among unnamed shareholders.
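    Those reported figures imply a razor-thin net margin. A quick back-of-the-envelope check, using only the numbers quoted above and the exchange rate implied by the article's own conversions:

    ```python
    # Xiaomi's 2013 results as reported by Reuters (figures from the article)
    profit_yuan = 347.48e6              # 347.48 million yuan
    revenue_yuan = 26.58e9              # 26.58 billion yuan
    usd_per_yuan = 56.15e6 / 347.48e6   # rate implied by the article's conversion, ~0.162

    margin = profit_yuan / revenue_yuan
    print(f"Net margin: {margin:.2%}")                                   # ~1.31%
    print(f"Revenue in USD: ${revenue_yuan * usd_per_yuan / 1e9:.2f}B")  # ~$4.30B
    ```

    A margin near 1.3 percent is consistent with the article's point that Xiaomi trades profit for market share.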

    Source from: http://www.techspot.com/

Our connection to content

It’s often said that humans are wired to connect: The neural wiring that helps us read the emotions and actions of other people may be a foundation for human empathy.
    But for the past eight years, MIT Media Lab spinout Innerscope Research has been using neuroscience technologies that gauge subconscious emotions by monitoring brain and body
    activity to show just how powerfully we also connect to media and marketing communications.
        “We are wired to connect, but that connection system is not very discriminating. So while we connect with each other in powerful ways, we also connect with characters on
        screens and in books, and, we found, we also connect with brands, products, and services,” says Innerscope’s chief science officer, Carl Marci, a social neuroscientist
        and former Media Lab researcher.
            With this core philosophy, Innerscope — co-founded at MIT by Marci and Brian Levine MBA ’05 — aims to offer market research that’s more advanced than traditional methods,
        such as surveys and focus groups, to help content-makers shape authentic relationships with their target consumers.
        “There’s so much out there, it’s hard to make something people will notice or connect to,” Levine says. “In a way, we aim to be the good matchmaker between content and people.”
        So far, it’s drawn some attention. The company has conducted hundreds of studies and more than 100,000 content evaluations with its host of Fortune 500 clients, which
        include Campbell’s Soup, Yahoo, and Fox Television, among others.
    And Innerscope’s studies are beginning to provide valuable insights into the way consumers connect with media and advertising. Take, for instance, its recent project to
    measure audience engagement with television ads that aired during the Super Bowl.
    Innerscope first used biometric sensors to capture fluctuations in heart rate, skin conductance, breathing, and motion among 80 participants who watched select ads,
    sorting the spots into “winning” and “losing” commercials (in terms of emotional response). Then its collaborators at Temple University’s Center for Neural Decision
    Making used functional magnetic resonance imaging (fMRI) brain scans to further measure engagement.
        Ads that performed well elicited increased neural activity in the amygdala (which drives emotions), superior temporal gyrus (sensory processing),
        hippocampus (memory formation), and lateral prefrontal cortex (behavioral control).
    “But what was really interesting was the high levels of activity in the area known as the precuneus — involved in feelings of self-consciousness — where it is believed
    that we keep our identity. The really powerful ads generated a heightened sense of personal identification,” Marci says.
        Using neuroscience to understand marketing communications and, ultimately, consumers’ purchasing decisions is still at a very early stage, Marci admits — but the Super
    Bowl study and others like it represent real progress. “We’re right at the cusp of coherent, neuroscience-informed measures of how ad engagement works,” he says.
   
    Capturing “biometric synchrony”
        Innerscope’s arsenal consists of 10 tools: Electroencephalography and fMRI technologies measure brain waves and structures. Biometric tools — such as wristbands and
    attachable sensors — track heart rate, skin conductance, motion, and respiration, which reflect emotional processing. And then there’s eye-tracking, voice-analysis,
    and facial-coding software, as well as other tests to complement these measures.
        Such technologies were used for market research long before the rise of Innerscope. But, starting at MIT, Marci and Levine began developing novel algorithms,
    informed by neuroscience, that find trends among audiences pointing to exact moments when an audience is engaged together — in other words, in “biometric synchrony.”
        Traditional algorithms for such market research would average the responses of entire audiences, Levine explains. “What you get is an overall level of
    arousal — basically, did they love or hate the content?” he says. “But how is that emotion going to be useful? That’s where the hole was.”
        Innerscope’s algorithms tease out real-time detail from individual reactions — comprising anywhere from 500 million to 1 billion data points — to locate instances
    when groups’ responses (such as surprise, excitement, or disappointment) collectively match.
        As an example, Levine references an early test conducted using an episode of the television show “Lost,” where a group of strangers are stranded on a tropical island.
    Levine and Marci attached biometric sensors to six separate groups of five participants. At the long-anticipated moment when the show’s “monster” is finally revealed,
    nearly everyone held their breath for about 10 to 15 seconds.
    “What our algorithms are looking for is this group response. The more similar the group response, the more likely the stimuli is creating that response,” Levine explains.
    “That allows us to understand if people are paying attention and if they’re going on a journey together.”
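    Innerscope's actual algorithms are proprietary, but the core idea, scoring moments where a group's signals move together rather than averaging them away, can be sketched roughly (hypothetical code, illustrative only):

    ```python
    from statistics import mean, pstdev

    def synchrony_scores(signals, window=10):
        """Score each time window by how similarly a group of biometric traces
        (one list of samples per participant) moves together.

        Simple proxy: 1 / (1 + mean across-participant spread). When everyone's
        signal does the same thing at the same time, the spread collapses and
        the score approaches 1.
        """
        n_samples = len(signals[0])
        scores = []
        for start in range(0, n_samples - window + 1, window):
            spreads = [pstdev(s[t] for s in signals)           # spread across people
                       for t in range(start, start + window)]  # at each instant
            scores.append(1.0 / (1.0 + mean(spreads)))
        return scores

    # Three hypothetical heart-rate traces: uncorrelated for ten samples, then
    # all dipping together, like the breath-holding moment in the "Lost" test.
    a = [70, 72, 71, 73, 70, 72, 71, 70, 73, 72] + [60] * 10
    b = [75, 68, 77, 66, 74, 69, 76, 67, 75, 68] + [60] * 10
    c = [65, 78, 64, 79, 66, 77, 63, 80, 65, 78] + [60] * 10
    scores = synchrony_scores([a, b, c], window=10)
    print(scores)  # the second window scores far higher than the first
    ```

    Averaging these traces would wash out the individual noise and the shared dip alike; windowed spread keeps the moment of collective response visible.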

    Getting on the map
        Before MIT, Marci was a neuroscientist studying empathy, using biometric sensors and other means to explore how empathy between patient and doctor can improve patient health.
            “I was lugging around boxes of equipment, with wires coming out and videotaping patients and doctors. Then someone said, ‘Hey, why don’t you just go
            to the MIT Media Lab,’” Marci says. “And I realized it had the resources I needed.”
        At the Media Lab, Marci met behavioral analytics expert and collaborator Alexander “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences, who helped
    him set up Bluetooth sensors around Massachusetts General Hospital to track emotions and empathy between doctors and patients with depression.  
    During this time, Levine, a former Web developer, had enrolled at MIT, splitting his time between the MIT Sloan School of Management and the Media Lab. “I wanted to
    merge an idea to understand customers better with being able to prototype anything,” he says.
        After meeting Marci through a digital anthropology class, Levine proposed that they use this emotion-tracking technology to measure the connections of audiences to media.
    Using prototype sensor vests equipped with heart-rate monitors, stretch receptors, accelerometers, and skin-conductivity sensors, they trialed the technology with students
    around the Media Lab.
        All the while, Levine pieced together Innerscope’s business plan in his classes at MIT Sloan, with help from other students and professors. “The business-strategy
    classes were phenomenal for that,” Levine says. “Right after finishing MIT, I had a complete and detailed business plan in my hands.”
        Innerscope launched in 2006. But a 2008 study really accelerated the company’s growth. “NBC Universal had a big concern at the time: DVR,” Marci says. “Were people who
    were watching the prerecorded program still remembering the ads, even though they were clearly skipping them?”
        Innerscope compared facial cues and biometrics from people who fast-forwarded ads against those who didn’t. The results were unexpected: while fast-forwarding,
    people stared at the screen blankly, but their eyes still caught relevant brands, characters, and text. Because they didn’t want to miss their show, fast-forwarding
    viewers also showed heightened engagement, signaled by leaning forward and staring fixedly.
    “What we concluded was that people don’t skip ads,” Marci says. “They’re processing them in a different way, but they’re still processing those ads. That was one of those
    insights you couldn’t get from a survey. That put us on the map.”
        Today, Innerscope is looking to expand. One project is bringing kiosks to malls and movie theaters, where the company recruits passersby for fast and cost-effective results.
    (Wristbands monitor emotional response, while cameras capture facial cues and eye motion.) The company is also aiming to try applications in mobile devices, wearables,
    and at-home sensors.
        “We’re rewiring a generation of Americans in novel ways and moving toward a world of ubiquitous sensing,” Marci says. “We’ll need data science and algorithms and
        experts that can make sense of all that data.”

    Source from: http://newsoffice.mit.edu/

Manual control

When you imagine the future of gesture-control interfaces, you might think of the popular science-fiction films “Minority Report” (2002) or “Iron Man” (2008).
        In those films, the protagonists use their hands or wireless gloves to seamlessly scroll through and manipulate visual data on a wall-sized, panoramic screen.
    We’re not quite there yet. But the brain behind those Hollywood interfaces, MIT alumnus John Underkoffler ’88, SM ’91, PhD ’99 — who served as scientific advisor for both
    films — has been bringing a more practical version of that technology to conference rooms of Fortune 500 and other companies for the past year. 
    Underkoffler’s company, Oblong Industries, has developed a platform called g-speak, based on MIT research, and a collaborative-conferencing system called Mezzanine that allows
    multiple users to simultaneously share and control digital content across multiple screens, from any device, using gesture control.
    Overall, the major benefit of such a system lies in boosting productivity during meetings, says Underkoffler, Oblong’s CEO. This is especially true for clients who
    tend to pool resources into brainstorming and whose meeting rooms may remain open all day, every day.
    “If you can make those meetings synthetically productive — not just times for people to check in, produce status reports, or check email surreptitiously under the table — that
    can be an electrifying force for the enterprise,” he says.
    Mezzanine surrounds a conference room with multiple screens, as well as the “brains” of the system (a small server) that controls and syncs everything. Several Wii-like wands,
    with six degrees of freedom, allow users to manipulate content — such as text, photos, videos, maps, charts, spreadsheets, and PDFs — depending on certain gestures they make with the wand.
    That system is built on g-speak, a type of operating system — or a so-called “spatial operating environment” — used by developers to create their own programs that run like Mezzanine.
    “G-speak programs run in a distributed way across multiple machines and allow concurrent interactions for multiple people,” Underkoffler says. “This shift in thinking — as if
    from single sequential notes to chords and harmonies — is powerful."
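    A wand with six degrees of freedom reports three translational and three rotational coordinates per sample. A minimal sketch of how such a pose might be represented and tested (hypothetical names, not Oblong's actual g-speak API):

    ```python
    from dataclasses import dataclass

    @dataclass
    class WandPose:
        """One sample from a six-degree-of-freedom wand."""
        x: float      # metres right of the room origin
        y: float      # metres above the floor
        z: float      # metres from the front wall
        roll: float   # rotation about the pointing axis, degrees
        pitch: float  # up/down tilt, degrees
        yaw: float    # left/right swing, degrees

    def pointing_at_front_wall(pose, tolerance_deg=10.0):
        """The kind of predicate a gesture recognizer might test: is the
        wand aimed roughly straight at the main screen?"""
        return abs(pose.pitch) < tolerance_deg and abs(pose.yaw) < tolerance_deg

    pose = WandPose(x=1.2, y=1.1, z=2.5, roll=5.0, pitch=2.0, yaw=-3.0)
    print(pointing_at_front_wall(pose))  # True
    ```

    A recognizer built this way maps regions of pose space to commands, which is what lets different gestures with the same wand drive different manipulations of the content.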
    Oblong’s clients include Boeing, Saudi Aramco, SAP, General Electric, and IBM, as well as government agencies and academic institutions, such as Harvard University’s Graduate
    School of Design. Architects and real estate firms are also using the system for structural designing.

    Putting pixels in the room
        G-speak has its roots in a 1999 MIT Media Lab project — co-invented by Underkoffler in Professor Hiroshi Ishii’s Tangible Media Group — called “Luminous Room,” which
    enabled all surfaces to hold data that could be manipulated with gestures. “It literally put pixels in the room with you,” Underkoffler says.
    The group designed light bulbs, called “I/O Bulbs,” that not only projected information, but also collected information from the surface they projected onto.
    That meant the team could make any projected surface a veritable computer screen, and the data could interact with, and be controlled by, physical objects.
    They also assigned pixels three-dimensional coordinates. Imagine, for example, if you sat down in a chair at a table, and tried to describe where the front,
    left corner of that table was located in physical space. “You’d say that corner is this far off the floor, this far to the right of my chair, and this much in front of me,
    among other things,” Underkoffler explains. “We started doing that with pixels.”
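    The table-corner example amounts to giving every pixel coordinates in room space rather than screen space. As a rough illustration (a hypothetical data structure, not the Luminous Room's actual representation):

    ```python
    from dataclasses import dataclass

    @dataclass
    class RoomPixel:
        """A pixel addressed in room space rather than screen space."""
        right_m: float   # metres to the right of the chosen origin (the chair)
        front_m: float   # metres in front of the origin
        height_m: float  # metres off the floor
        rgb: tuple       # colour the projector should paint there

    # The table-corner example from the text: a pixel at the front, left
    # corner of the table, described relative to where you are sitting.
    corner = RoomPixel(right_m=0.4, front_m=0.6, height_m=0.75, rgb=(255, 255, 255))
    print(corner.height_m)  # 0.75
    ```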
        One application for urban planners involved placing small building models onto an I/O Bulb-projected table, “and the pixels surrounded the model,” Underkoffler says. This
    provided three-dimensional spatial information, from which the program cast accurate, digital shadows from the models onto the table. (Changing the time on a digital clock
    changed the direction of the shadows.)
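    The shadow behavior in that demo follows from simple projective geometry: a model of height h lit from elevation angle theta casts a shadow of length h / tan(theta). A minimal sketch (illustrative only, not the Luminous Room's code):

    ```python
    import math

    def shadow_length(model_height_m, sun_elevation_deg):
        """Length of the shadow a building model casts on the tabletop when
        lit from the given solar elevation angle."""
        return model_height_m / math.tan(math.radians(sun_elevation_deg))

    # A 0.3 m model: the lower the simulated sun, the longer the shadow, so
    # turning the digital clock toward evening stretches the shadows out.
    print(round(shadow_length(0.3, 60), 3))  # 0.173 (sun high)
    print(round(shadow_length(0.3, 30), 3))  # 0.52  (sun low)
    ```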

    In another application, the researchers used a glass vase to manipulate digital text and image boxes that were projected onto a whiteboard. The digital boxes were linked to
        the vase in a circle via digital “springs.” When the vase moved, all the graphics followed. When the vase rotated, the graphics bunched together and “self-stored” into
    the vase; when the vase rotated back, the graphics reappeared in their original arrangement.
        These initial concepts — using the whole room as a digital workplace — became the foundation for g-speak. “I really wanted to get the ideas out into the world in a
    form that everyone could use,” Underkoffler says. “Generally, that means commercial form, but the world of movies came calling first.”

    “The world’s largest focus group”

    Underkoffler was recruited as scientific advisor for Steven Spielberg’s “Minority Report” after meeting the film’s crew, who were searching for novel technology ideas at
        the Media Lab. Later, in 2003, Underkoffler reprised his behind-the-scenes gig for Ang Lee’s “Hulk,” and, in 2008, for Jon Favreau’s “Iron Man,” which both depicted
    similar technologies.
        Seeing this technology on the big screen inspired Underkoffler to refine his MIT technology, launch Oblong in 2006, and build early g-speak prototypes — glove-based
    systems that eventually ended up with the company’s first customer, Boeing.
        Having tens of millions of viewers seeing the technology on the big screen, however, offered a couple of surprising perks for Oblong, which today is headquartered in
    Los Angeles, with nine other offices and demo rooms in cities including Boston, New York, and London. “It might have been the world’s largest focus group,” Underkoffler says.
        Those enthused by the technology, for instance, started getting in touch with Underkoffler to see if the technology was real. Additionally, being part of a big-screen
    production helped Underkoffler and Oblong better explain their own technology to clients, Underkoffler says. In such spectacular science-fiction films, technology competes
    for viewer attention and, yet, it needs to be simplified so viewers can understand it clearly.
        “When you take technology from a lab like at MIT, and you need to show it in a film, the process of refining and simplifying those ideas so they’re instantly legible on
    screen is really close to the refinement you need to undertake if you’re turning that lab work into a product,” he says. “It was enormously valuable to us to strip away
    everything in the system that wasn’t necessary and leave a really compact core of user-interface ideas we have today.”
        After years of writing custom projects for clients on g-speak, Oblong turned the most-requested features of these jobs — such as having cross-platform and multiple-user
    capabilities — into Mezzanine. “It was the first killer application we could write on top of g-speak,” he says. “Building a universal, shared-pixel workspace has enormous
    value no matter what your business is.”
    Today, Oblong is shooting for greater ubiquity of its technology. But how far away are we from a consumer model of Mezzanine? It could take years, Underkoffler admits:
    “But we really hope to radically tilt the whole landscape of how we think about computers and user interface.”

        Source from: http://newsoffice.mit.edu/

Hewlett Foundation funds new MIT initiative on cybersecurity policy

MIT has received $15 million in funding from the William and Flora Hewlett Foundation to establish an initiative aimed at laying the foundations for a smart, sustainable
cybersecurity policy to deal with the growing cyber threats faced by governments, businesses, and individuals.
The MIT Cybersecurity Policy Initiative (CPI) is one of three new academic initiatives to receive a total of $45 million in support through the Hewlett Foundation’s Cyber
Initiative. Simultaneous funding to MIT, Stanford University, and the University of California at Berkeley is intended to jump-start a new field of cyber policy research.
The idea is to generate a robust “marketplace of ideas” about how best to enhance the trustworthiness of computer systems while respecting individual privacy and free expression
rights, encouraging innovation, and supporting the broader public interest.

With the new awards, the Hewlett Foundation has now allocated $65 million over the next five years to strengthening cybersecurity, the largest-ever private commitment to
this nascent field. “Choices we are making today about Internet governance and security have profound implications for the future. To make those choices well, it is imperative
that they be made with a sense of what lies ahead and, still more important, of where we want to go,” says Larry Kramer, president of the Hewlett Foundation. “We view these grants
as providing seed capital to begin generating thoughtful options.”

“I’ve had the pleasure of working closely with Larry Kramer throughout this process. His dedication and the
Hewlett Foundation’s remarkable generosity provide an opportunity for MIT to make a meaningful and lasting impact on cybersecurity policy,” MIT President L. Rafael Reif says.
“I am honored by the trust that the Foundation has placed in MIT and excited about the possibilities that lie ahead.”
Each of the three universities will take complementary approaches to addressing this challenge. MIT’s CPI will focus on establishing quantitative metrics and qualitative models
to help inform policymakers. Stanford’s Cyber-X Initiative will focus on the core themes of trustworthiness and governance of networks. And UC Berkeley’s Center for Internet
Security and Policy will be organized around assessing the possible range of future paths cybersecurity might take.

Interdisciplinary approach
The Institute-wide CPI will bring together scholars from three key disciplinary pillars: engineering, social science, and management. Engineering is vital to understanding
the architectural dynamics of the digital systems in which risk occurs. Social science can help explain institutional behavior and frame policy solutions, while management
scholars offer insight on practical approaches to institutionalize best practices in operations.
MIT has a strong record of applying interdisciplinary approaches to large-scale problems from energy to cancer. For example, the MIT Energy Initiative has brought together
faculty from across campus — including the social sciences — to conduct energy studies designed to inform future energy options and research. These studies include technology
policy reports focused on nuclear power, coal, natural gas, and the smart electric grid.
“We’re very good at understanding the system dynamics on the one hand, then translating that understanding into concrete insights and recommendations for policymakers.
And we’ll bring that expertise to the understanding of connected digital systems and cybersecurity. That’s our unique contribution to this challenge,” says Daniel Weitzner,
the principal investigator for the CPI and a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Developing a more formal understanding of the security behavior of large-scale systems is a crucial foundation for sound public policy. As an analogy, Weitzner says, imagine
trying to shape environmental policy without any way of measuring carbon levels in the atmosphere and no science to assess the cost or effectiveness of carbon mitigation tools.
“This is the state of cybersecurity policy today: growing urgency, but no metrics and little science,” he says.
CSAIL is home to much of the technology that is at the core of cybersecurity, such as the RSA cryptography algorithm that protects most online financial transactions,
and the development of web standards via the MIT-based World Wide Web Consortium. “That gives us the ability to have our hands on the evolution of these technologies to learn
about how to make them more trustworthy,” says Weitzner, who was the United States deputy chief technology officer for Internet policy in the White House from 2011 to 2012,
while on leave from his longtime position at MIT.

First steps
In pioneering a new field of study, CPI’s first challenge is to identify key research questions, select appropriate methodologies to guide the work, and establish
patterns of cross-disciplinary collaboration. Research challenges include:

How policymakers should address security risks to personal health information;
How financial institutions can reduce risk by sharing threat intelligence;
How to develop cybersecurity policy frameworks for autonomous vehicles such as drones and self-driving cars; and
How to achieve regional and even global agreements on both privacy and security norms in online environments.
To address these issues, CPI will not only bring to bear different disciplines from across MIT — from computer science to management to political science — but also
engage with stakeholders outside the Institute, including government, industry, and civil society organizations. “We want to understand their challenges and work with
them on formulating solutions,” Weitzner says.
In addition to research, a contribution of the CPI in the long run will be to create a pipeline of students to serve as the next generation of leaders working at
this intersection of technology and public policy.
The mission of the William and Flora Hewlett Foundation is to “help people build measurably better lives.” The Foundation concentrates its resources on activities in
education, the environment, global development and population, performing arts, and philanthropy, as well as grants to support disadvantaged communities in the
San Francisco Bay Area.
The Foundation was established by the late William Hewlett with his wife, Flora Lamson Hewlett, and their eldest son, Walter B. Hewlett. William Hewlett, who earned an
SM degree in electrical engineering from MIT in 1936, was co-founder, with David Packard, of the Hewlett-Packard Company, a multinational information technology company.

Source from: http://newsoffice.mit.edu/

Best Light Laptop/UltraBook WINNER: Lenovo ThinkPad X1 Carbon (3rd Generation)

A familiar sight on traveling execs’ wish lists, Lenovo’s 14-inch ultrabook got a whole lot more desirable for 2014. In the refreshed X1 Carbon, the advent of Intel
4th-generation Core “Haswell” power, Lenovo claims, will boost battery life from about five to about nine hours. Plus, the physical redesign made the laptop 2mm thinner and
trimmed its weight to 2.8 pounds. Its New Year’s resolution is a super-sharp 2,560x1,440 pixels in some of the upper-shelf configurations. (Base models will still show 1,600x900.)
The biggest change you will see, though, is north of the keyboard row. The usual function keys are gone; in their place, a row of electroluminescent “adaptive keys” changes from
ho-hum function keys to multiple other modes, depending on the active application. Plus, you can lay the X1 flat, opening the screen all the way, and have the onscreen
image flip 180 degrees to orient itself to someone across the table.

Source from: http://www.computershopper.com/

New Windows 10 leak (build 9901) highlights more polished Cortana and new apps

As Microsoft prepares to detail new features coming to Windows 10 at a January press event, pre-consumer builds have already given us some
glimpses at the next-gen OS. The latest, build 9901, includes a more refined design for Cortana and a number of other tweaks and new apps.
Microsoft’s answer to Siri and Google Now first appeared in a video demo earlier this month, but that particular build had a pretty barebones UI.
The most recent implementation looks far more polished and is likely much closer to what it will look like in Windows 10. Namely, Cortana sits at
the top of the search interface for Windows 10, which itself has a new home in the task bar.
Microsoft’s assistant will respond to text and voice commands, and overall it works much like the Windows Phone version, with the ability
to summon it with the words “Hey Cortana.” Needless to say, it’s not 100 percent functional yet.
There are several new and updated Modern UI apps such as Camera, Calculator, Alarms, Remind Me, Photos, Contact Support and a Getting Started app.
Also in build 9901 is a new Xbox app that appears to work as a central hub for achievements, friends lists, activity feeds, and the Store.
Paul Thurrott has a detailed changelog of the new build over at WinSupersite, while a video from WinBeta shows some of the new features in action.

Source from: http://www.techspot.com/