Quiet that Ringing in the Brain

A new drug that selectively affects potassium channels in the brain may offer effective treatment for epilepsy and prevent tinnitus, UConn neurophysiologist Anastasios Tzingounis and colleagues report in the June 10 Journal of Neuroscience.

Epilepsy and tinnitus are both caused by overly excitable nerve cells. Healthy nerves have a built-in system that slams on the brakes when they get too excited. But in some people this braking system doesn’t work, and the nerves run amok, signaling so much that the brain gets overloaded and has a seizure (epilepsy) or hears phantom ringing (tinnitus). About 65 million people worldwide are affected by epilepsy. The numbers on tinnitus are not as clear-cut, but the American Tinnitus Association estimates 2 million people have tinnitus so disabling they have trouble functioning in daily life.

The existing drugs to treat epilepsy don’t always work, and can have serious side effects. One of the more effective, called retigabine, helps open KCNQ potassium channels, which are the “brakes” that shut down the signaling of overly excited nerves. Unfortunately, retigabine has significant adverse side effects, including sleepiness, dizziness, problems with urination and hearing, and an unnerving tendency to turn people’s skin and eyes blue. Because of this, it’s usually only given to adults who don’t get relief from other epilepsy drugs.

Tzingounis’s research focuses on KCNQ potassium channels and how they work. He became interested in the topic several years ago, when doctors around the world began reporting infants with severe, brain-damaging seizures. Genetic testing showed that the children with this problem had genetic differences in their KCNQ potassium channels. Most existing anti-seizure drugs don’t work for these children, and few physicians are willing to prescribe retigabine for babies because of its side effects.

A neurobiologist at the University of Pittsburgh, Thanos Tzounopoulos, who specializes in tinnitus and knew about Tzingounis’s work on potassium channels, contacted Tzingounis in 2013 and asked if he’d like to test out a new drug candidate. The drug, SF0034, was chemically identical to retigabine, except that it had an extra fluorine atom. A company called SciFluor had developed SF0034, and wanted to know whether the compound had promise against epilepsy and tinnitus. The two researchers thought the drug had the potential to be much better than retigabine, and began working together to test it.

The most important question to answer was whether SF0034 works on KCNQ potassium channels the same way retigabine does and, if so, whether it is better or worse than its parent compound.

KCNQ potassium channels are found in the initial segment of axons, long nerve fibers that reach out and almost, but don’t quite, touch other cells. The gap between the axon and the other cell is called a synapse. When the cell wants to signal to the axon, it floods the synapse with sodium ions to create an electrical potential. When that electrical potential goes on too long, or gets out of hand, the KCNQ potassium channel kicks in. It opens, potassium ions flood out, and the sodium-induced electrical potential shuts down.

In some types of epilepsy, the KCNQ potassium channels have trouble opening and shutting down runaway electrical potentials in the nerve synapse. Retigabine helps them open.

There are five different kinds of KCNQ potassium channels in the body, but only two are important in epilepsy and tinnitus: KCNQ2 and KCNQ3. The problem with retigabine is that it acts on other KCNQ potassium channels as well, and that’s why it has so many unwanted side effects.

Tzingounis and Tzounopoulos first tested SF0034 in neurons, and found that it was more selective than retigabine. It seemed to open only KCNQ2 and KCNQ3 potassium channels, not affecting KCNQ4 or KCNQ5. It was more effective than retigabine at preventing seizures in animals, and it was also less toxic.

The results are promising, both for research and for medicine. SciFluor now plans to start FDA trials with SF0034, to see whether it is safe and effective in people. Treating epilepsy is the primary goal, but tinnitus can be similarly debilitating, and sufferers would welcome a decent treatment.

Tzingounis is pleased as well. “This [SF0034] gives me another tool, and a better tool, to dissect the function of these channels,” he says. “We need to find solutions for kids – and adults – with this problem.”

Apple Watch enterprise ecosystem gains policy control

IDC estimates 111.9 million wearable devices will be in use by 2018, so it’s inevitable that employees will drive enterprise adoption of these devices, just as they drove enterprise use of iPhones, iPads and other mobile devices. Fifty-one per cent of business leaders “identify wearables as a critical, high, or moderate priority for their organization,” says Forrester Research.

To help secure these devices, Good Technology has introduced an updated secure email and collaboration app, Good Work for Apple Watch, which provides secure access to email and meeting notes.

Productivity and security

To see the way ahead, consider the enterprise significance of the latest Apple patents, in which the company describes a means to share Apple Watch files by shaking hands. It is inevitable that these devices will be used for all kinds of collaboration in unified communications or SaaS deployments. However, to support such use, IT pros will want to implement complete policy control over the device; they have to, if only to satisfy data protection law.

“Enabling enterprise mobility means securing data accessed and used on all devices, whether smartphones, tablets or wearables,” said Christy Wyatt, chairman and CEO of Good Technology, “allowing greater productivity for employees while also providing complete policy controls for IT.” To enable this the company has updated the policy controls it places inside its Secure Mobility Platform for IT.

Among other features, IT admins can enable or disable notifications and the Good Work watch app using a Web-based management console. Good also supports the new Apple Watch wrist-detection restriction, which should enable enterprises to implement tighter MDM policy.

Intelligent agents

This is only the beginning of Apple Watch’s emergence in the enterprise: huge investments are already being made, and there is a growing ecosystem of solutions to empower the wearable enterprise. For example, Apple and IBM have begun introducing Apple Watch support to some of their jointly developed MobileFirst for iOS apps. That is by no means the only example of enterprise-class Apple Watch solutions.

Many mistake the Apple Watch for an object defined only by its existing features, but that’s a serious error of judgement. It is important to understand that the impact of these devices is determined not by the features they offer in isolation, but by what they offer when used in conjunction with back-end systems, as Forrester analyst J.P. Gownder explains: “Solving business problems depends on linking wearable devices to back-end systems. And the usability of wearables in turn depends upon intelligent agents.”

Digital transformations

These technologies will be critical to the digital transformation of everything, a transformation Apple is already deeply enmeshed in.

“The market for company-provided wearables will be larger than the consumer market in the next five years,” writes Gownder. “Want proof? In 2012, in the US alone, 7.7 million people worked in healthcare positions that could benefit from wearables, while 3.2 million worked in public safety, and a whopping 13.8 million worked in sales roles. Not all of these professionals will adopt wearables, but their companies have every incentive to deploy wearable technologies and business processes that create positive financial and/or customer service results for customers.”

With iOS already the dominant mobile platform in any serious enterprise and a host of developers active in the space, Apple Watch seems set to seize its time.

Healthcare players are actively blocking data sharing

CHICAGO — Five years ago, only 20% of physicians used electronic medical records (EMRs). Today, 80% use them.

Since the enactment of the HITECH Act, which required that EMRs be adopted across all healthcare providers, the federal government has invested more than $28 billion toward their use.

And, yet, EMR data sharing between disparate vendor platforms, geographically dispersed facilities and unassociated medical institutions remains at a virtual standstill.

Experts at the Healthcare Information Management Systems Society (HIMSS) conference here this week said the industry knows the problem isn’t a technological one; it’s about the money. By keeping their software proprietary and unable to exchange data, or by actively blocking the use of protocols that would otherwise allow it, vendors can corner their respective markets.

Cris Ross, CIO at the Mayo Clinic, said healthcare interoperability is not a “crisis,” it’s more like a “perpetual rainy day.”

Hospital departments are frustrated because they can’t get laboratory reports on time, they can’t get radiological images or they don’t get complete records.

“We have patients who show up today literally with banker boxes full of paper. And, you know, the job gets done,” he said. “We’re sort of gutting it out.”

About 30% of healthcare expenditures are wasted because the industry isn’t following best practices and because of duplication of efforts, Ross said.

Shahid Shah, CEO of research firm Netspective Communications, said that even Congress is aware of the problem after a report last week laid blame at the feet of EMR providers and healthcare institutions.

“The report basically in summary said that there are some folks in the healthcare value chain that are actively blocking the sharing of data,” Shah said. “This was completely obvious…. Everybody knew that active blockers are there, and we know many of them here,” Shah said. “Unless they’re named, hopefully this is step one…. That’s when we get to reality. Until we get to reality, we can’t solve the problem.”

Last week, the Office of the National Coordinator for Health Information Technology (ONC) reported to Congress that despite health information exchange technology being fully baked, data is not being shared among providers.

“Current economic and market conditions create business incentives for some persons and entities to exercise control over electronic health information in ways that unreasonably limit its availability and use,” the ONC report said.

The ONC went on to state, “some persons and entities are interfering with the exchange or use of electronic health information in ways that frustrate the goals of the HITECH Act and undermine broader health care reforms.”

Being able to exchange healthcare data promises far greater benefits than just the convenience of data mobility. Patient data can be anonymized and used in accelerating scientific research and tracking health trends.

Today, Karen DeSalvo, national coordinator for health information technology, announced new efforts to curtail data blocking, saying the ONC’s focus will be on interoperability.

Karen DeSalvo, national coordinator for health information technology (right), announced new efforts to curtail data blocking by industry players. To her left, from L-R are: Jodi Daniel, director of the ONC’s Office of Policy Planning; Ahmed Haque, director of the ONC’s Office of Programs & Engagement; Lucia Savage, the ONC’s chief privacy officer; and Steve Posnack, director of the ONC’s Office of Standards and Technology Regulation.

In March, the ONC published its 2015 Edition Health IT Certification Criteria, a set of proposed rules for qualifying EMRs for use. Those rules will be under a public comment period until May 29.

Two weeks ago, the ONC released its proposed rules for “Stage 3” of Meaningful Use of EMRs, which focused on improving how electronic health information is shared and, ultimately, how care is delivered.

Steve Posnack, director of the ONC’s Office of Standards and Technology Regulation, said the rules will include surveillance of active blocking.

Posnack said that surveillance will come in two forms: investigations into complaints, and random sampling of EMR software to see whether blocking is baked into the product.

On a related note, the Department of Health and Human Services (HHS) today announced $1 million in new grant programs to help improve sharing of health information in rural and poor areas, as well as for entities not covered by its EMR Meaningful Use rules, such as extended care facilities.

Jodi Daniel, director of the ONC’s Office of Policy Planning, said the agency will also focus on new medical vocabulary and content standards, and access to data for healthcare providers, patients and their care givers, which may include authorized friends and family.

DeSalvo warned that the ONC is neither an investigative agency nor an enforcement entity, but will instead work with other bodies, such as the Federal Trade Commission and Congress, which can impose fines and other penalties on organizations using unfair competitive practices.

“I think it would be fair to say it’s going to take action on the part of the federal government with existing administrative authorities of the private sector, including [EHR] providers and developers to set expectations to set contracts and to have more transparency,” DeSalvo said. “If necessary, there may be additional opportunities for Congress to weigh in.”

In January, the ONC released a roadmap detailing how it would address interoperability. But revelations that industry vendors and others are actively blocking data sharing may prompt it to take a more proactive role.

Healthcare experts say technology and standards aren’t the problem. As Shah succinctly put it, “We have no healthcare interoperability crisis.”

Standards such as Health Level Seven International’s (HL7) Fast Healthcare Interoperability Resources (FHIR) are seeing increased adoption among providers for exchanging patient information.

FHIR (pronounced “fire”) is growing in popularity because of its simplicity and ease of use. It’s based on RESTful APIs, using the Internet’s HTTP protocol and other familiar web specifications such as XML and JSON. It also natively supports leading privacy and security specifications.
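
As a rough illustration of what a RESTful FHIR exchange looks like in practice, here is a minimal Python sketch that fetches a single Patient resource as JSON. The server URL and patient ID are hypothetical placeholders, not details from the article or from any particular vendor’s system.

```python
# Minimal sketch of a FHIR "read" interaction: GET one Patient resource as JSON.
# The base URL and resource ID below are hypothetical placeholders.
import json
import urllib.request

FHIR_BASE = "https://fhir.example.org/base"  # placeholder FHIR server
PATIENT_ID = "12345"                         # placeholder resource ID

req = urllib.request.Request(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/json"},  # ask the server for JSON output
)

with urllib.request.urlopen(req) as resp:
    patient = json.load(resp)

# Every FHIR resource declares its type; Patient also carries a structured name.
print(patient.get("resourceType"), patient.get("name"))
```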

Other health information exchange specifications include the Direct Project, a simple, secure, standards-based method for healthcare providers to share data directly with known, trusted recipients over the Internet, and CONNECT, open source software that uses the Nationwide Health Information Network (NHIN) standards and governance to make sure that health information exchanges set up by the government are compatible with other exchanges in the U.S.

“In terms of options, there are many,” said Venk Reddy, senior director of Connected Health at Walgreens. “Walgreens supports Connect, Direct, and soon FHIR.”

Even health insurance giant Humana’s CEO, Bruce Broussard, told a packed auditorium of health IT technologists today that technology is not the answer.

“Take the technology we have, and all the things we know to do and take the necessary steps,” Broussard said. “Interoperability is the opportunity for us to act like a team.”

Broussard illustrated how other industries went through their own interoperability transitions, and while painful and arduous, the end result was well worth it.

Broussard pointed to the financial services industry, and providers such as Charles Schwab, which became the first firm to offer competitors’ products.

“Today, it would be unheard of not to offer other products,” he said.

Death to Faxes

Death to faxes. There, I said it.

Nearly every medical organization in this country still uses fax machines. This vintage, 1960s technology was replaced long ago in some industries. But many practices still send dozens or even hundreds of faxes a day. It is familiar, reliable technology.

Unfortunately, the fax machine is also a major source of HIPAA breaches, particularly breaches of a single record. It is all too easy for a provider to make a simple mistake while entering a phone number, and the fax may then connect with another machine: the wrong machine.

If you are faxing Protected Health Information (PHI), you have just breached the patient’s record. The law requires you to inform the patient by letter and to report the breach to HHS at the end of the year.

This is not some theoretical problem. A staff member at 4Medapproved has a fax machine in his home office. Last year, he came home to discover another man’s pathology report for prostate cancer waiting in his fax tray. When he called the doctor’s office to report the mistake, they did not seem to take the breach very seriously, as if they were used to faxing records to the wrong numbers.

I suspect the patient whose privacy was violated would have taken it more seriously. But it did not sound as if the practice was going to inform him, in violation of the law.

Apart from the problem of wrong numbers, faxes are obsolete, insecure technology. We really shouldn’t be using them at all in healthcare.

HIPAA does not require faxes to be encrypted, because there is an increasingly artificial divide in HIPAA between analog and digital technology. Faxes are considered analog even though these days they are surely traveling over digital networks. The point is that voice conversations and faxes do not have to be encrypted to be compliant. Yet, faxes could easily be intercepted and deciphered.

The risk only grows after the fax arrives. Most fax machines are set to print upon receipt, which means that anyone can access the PHI after it has printed. There is no way to authenticate access by the recipient.

Faxes are a breach waiting to happen.

Now, they can be made safer, to some extent. A colleague in IT told me recently that they had set a practice’s incoming faxes to encrypt upon arrival. The recipient has to log in to view the fax. Thus, they could control and track access.

There also are online faxing services that enable encrypted, tracked faxing. But the ones I have seen are essentially encrypted email portals. They are really fax “simulators” more than anything else.

But even if faxes could be made secure, they would still be absurd.

The patient information being sent by fax was probably in electronic form originally. The fax essentially converts that electronic data into paper form. In all likelihood, after that paper record arrives, someone will have to type the information into the EHR. By hand. Surely this is madness!

Yet every time I visit a doctor’s office, I see front office staff transcribing information from paper into the EHR.

It is already relatively easy to send encrypted email, whether through Office 365 or Google Business Apps or one of the many other HIPAA-compliant email providers. If your EHR has a 2014 certification, it can send the data as a C-CDA that machines can read as structured data.
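
To illustrate what “structured data” buys you over a printed fax, here is a minimal Python sketch that pulls a patient’s name out of a C-CDA XML document instead of having front-office staff retype it. The file name is a placeholder, and the element paths assume a typical C-CDA header, so real documents may differ.

```python
# Minimal sketch: read a patient's name out of a C-CDA XML document instead of
# retyping it by hand. The file name is a placeholder and the element paths
# assume a typical C-CDA header, so real documents may differ.
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}  # C-CDA documents use the HL7 v3 namespace

tree = ET.parse("ccda_example.xml")  # hypothetical received document
patient = tree.find(".//hl7:recordTarget/hl7:patientRole/hl7:patient", NS)

if patient is not None:
    given = patient.findtext("hl7:name/hl7:given", default="", namespaces=NS)
    family = patient.findtext("hl7:name/hl7:family", default="", namespaces=NS)
    print(f"Patient: {given} {family}")
```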

And that’s apart from the more sophisticated forms of HIE that are now available in most states.

I know that interoperability has a long way to go. It should be easier for providers to send PHI as secure data. But I also believe that habitual faxing is making adoption slower than it need be.

Maybe it’s not quite time to take your fax machine out into a field and hit it with a baseball bat. Not yet. But I do think practices should commit to using secure communications whenever possible. The fax machine should be pushed into some corner of shame, to be used only as a last resort. The sooner fax machines go the way of the dodo, the better it will be for us all.

HIPAA Requires Access to Health Records

Healthcare providers may not be aware that HIPAA requires access to health records, in addition to protecting data from breaches. Remember that the HIPAA Security Rule is designed to protect the Confidentiality, Integrity, and Availability (CIA) of health information. When we think of HIPAA, we usually think about confidentiality and pay little attention to access. This oversight could be costly for providers.

Unfortunately, healthcare is a perfect target for ransomware, which is designed to deny access to data. Ransomware works by secretly encrypting data, making it unreadable by the provider. To regain access to the data, the provider must pay hackers for a password to unlock the data.

It’s a bit like coming home to discover that thieves have changed all the locks on your house. The thieves taunt you from your roof: If you want the new keys, you’ve got to give them all your cash.

Of course, in the real world, you would simply call the police, or possibly throw rocks. But in the world of cybercrime, the thieves are somewhere in Ukraine or Nigeria, and instead of cash, they demand Bitcoin, which is difficult to trace.

Sadly, for healthcare providers, the situation is even worse, because losing access to health records is a HIPAA violation. It does not matter that the provider was the victim of a cybercriminal. The provider has the responsibility to maintain access to those records, and federal regulations allow no excuses for failure.

So it’s like the thieves change your locks and run off with your cash, but when the police show up, they arrest you!

The bad news is that ransomware attacks are only increasing, and many new forms of ransomware are appearing. A couple of years ago, a nasty bit of ransomware called CryptoLocker made international news. Now that CryptoLocker has been tamed, new ransomware such as CryptoWall is proliferating through cyberspace.

So what can be done? The good news is that the best defense against ransomware is not sophisticated software or IT support. Rather, your best defense is HIPAA training and awareness. Ransomware usually infects computers through phishing email attacks. In other words, a staff member receives a deceptive email that tricks them into clicking on a link or attachment, and ransomware infects the network.

Basic training on data security can thwart most phishing attacks, because savvy computer users do not click on links or attachments in emails from sources they do not recognize and trust. Considering that regular training on health privacy is a core HIPAA requirement anyway, ensuring that all staff complete training at least annually is a no-brainer: it is important for compliance, and it protects your practice.

Good cyber-defenses also play a role. To be sure, every practice should have a robust firewall and anti-malware protection in place. These are also HIPAA requirements. Strong security software can detect and quarantine malware before it corrupts every computer on the network.

Many providers would also benefit by moving to the cloud. The cloud allows for economies of scale, so dedicated security experts who would never otherwise be available to an individual practice can intervene when malware strikes. Moreover, cloud services can close the window on mischief by simply discarding the data on corrupted local computers, since the master copy lives on the server. And the cloud can be strict about applications, allowing only authorized programs to run, rather than trying to play catch-up after the damage has begun.

Many providers remain easy targets for ransomware attacks, and they may not realize that falling prey could expose them to the double-whammy of cybercrime and government penalties. But training and diligence can prevent disaster before it strikes.

OS X 10.11

What We Expect

The next major update to Apple’s OS X operating system, OS X 10.11, is expected to be previewed this June, at Apple’s annual Worldwide Developers Conference. We have a limited amount of information on OS X 10.11, but given that OS X 10.10 Yosemite just introduced a major design change, it’s likely OS X 10.11 will continue to offer the same general design, perhaps with under-the-hood improvements and new features.

Image: The new design introduced with OS X Yosemite

According to rumors, OS X 10.11 will focus heavily on bug fixes, optimization improvements, and security enhancements, much like iOS 9.

Specifically, Apple is rumored to be working on a new kernel-level security system called “rootless” that will help curb malware and protect sensitive data by preventing users from accessing protected files on their Macs.

Apple may also convert many IMAP-based applications like Notes, Reminders, and Calendar to its own iCloud Drive system, improving communication in these apps between devices and increasing security. A “Trusted Wi-Fi” feature may allow Macs and iOS devices to connect to trusted wireless routers with no additional security measures, while non-trusted routers would have a heavily encrypted wireless connection.

There will also be a few new consumer-facing features included in OS X 10.11. The Maps app may be updated with support for transit directions, and there are rumors suggesting the operating system will gain a new default font — San Francisco, the same font used for the Apple Watch. OS X 10.11 may also include a Control Center that was originally a feature rumored for OS X Yosemite. The Control Center would include music controls and other features similar to the Control Center on iOS, like access to Do Not Disturb, Wi-Fi, and Bluetooth.

Potential Name

With OS X 10.9 Mavericks, Apple ceased naming its operating system updates after large cats and instead announced plans to name future updates after major California landmarks.

We don’t know what Apple will choose to call its next operating system update, but the company has trademarked a long list of possible names that could be used for upcoming OS X updates. The names cover several major landmarks in California, ranging from surfing spots and popular cities to mountains and deserts. There are even a few iconic California animal names thrown in, like Condor, Grizzly, and Redtail.

The full list of names: Redwood, Mammoth, California, Big Sur, Pacific, Diablo, Miramar, Rincon, El Cap, Redtail, Condor, Grizzly, Farallon, Tiburon, Monterey, Skyline, Shasta, Sierra, Mojave, Sequoia, Ventura, and Sonoma.

Thus far, we’ve had OS X 10.9 Mavericks and OS X 10.10 Yosemite, one name focused on a water-based location and the other on a forest-based location. Apple may be picking names randomly, but it’s also possible the company will alternate between names that relate to water and names that relate to land.

Photo of Monterey, California, one of the potential names for OS X 10.11 or future versions of OS X

If that’s the case, we could potentially get another of the ocean-oriented names, such as Pacific, Monterey, Farallon, or Rincon, but it’s not clear whether Apple is following a specific naming scheme. There’s also the possibility that the company holds other trademarks in secret, or has yet to apply for some, meaning a name not on the list could be chosen for OS X 10.11.

We’ve polled our forum members to find the names people preferred out of Apple’s trademarked list, and OS X Redwood came in first, followed by OS X Mojave and OS X Sequoia.

Discuss OS X 10.11

We may not know what OS X 10.11 will offer, but that hasn’t stopped our forum members from listing what they’d like to see in the next operating system update.

Many of our forum members have said they’d love to see Apple focus on speed optimizations and bug fixes rather than new features, but some requests include a smarter Spotlight window, Siri integration, a better Dark Mode, and an expansion of the Continuity features first introduced with Yosemite.

Want to share what you’d like to see in OS X 10.11? Join in on the discussion.

Testing

The number of visits we see to MacRumors from Apple IP addresses running pre-release software often gives us hints as to how development is progressing on upcoming updates.

Chart: Increasing visits to MacRumors.com from devices running OS X 10.11 on Apple’s networks

Visits we’re receiving from devices running OS X 10.11 remain relatively low, in the range of dozens per day, but they have been picking up since the start of the new year. That suggests testing is well underway, as it should be as we head toward an initial unveiling and developer seeding in the coming months.

We expect to see the number of visits from machines running OS X 10.11 pick up as we creep closer to June. Apple will likely begin distributing the operating system internally to additional employees in the coming weeks to prepare for a preview at WWDC.

Release Date

Apple previews each new version of OS X and iOS at its Worldwide Developers Conference, so we will likely get our first look at OS X 10.11 on June 8, when the company holds its WWDC keynote event.

After the keynote introduction, developers will be given access to OS X 10.11 for testing purposes, and following an extended beta testing period, OS X 10.11 will most likely see a public release in the fall of 2015. Apple’s been providing public beta testers with new versions of OS X, so testers may receive OS X 10.11 well ahead of a public launch.

Apple to Discontinue Newsstand

Apple is planning to do away with Newsstand, its central app that stores newspaper and magazine subscriptions for users, according to sources who spoke with Re/code. In its place, the company will introduce a new Flipboard-style aggregation experience that will showcase curated lists of articles and content for individual customers. The partners for the new app will include ESPN, The New York Times, Conde Nast and Hearst, with the new app focused on providing “samples” of content.

Since magazines and newspapers were required to live inside the Newsstand app, many of Apple’s partners complained that their content was buried after Newsstand’s introduction. With the new structure in place, individual magazines and publications will sell their own app experiences within the App Store, allowing companies to push their content directly to a user’s device without making readers navigate through Apple’s Newsstand app. While Apple is said to be adjusting its revenue cut for some types of subscription content, the company will reportedly continue to take its traditional 30 percent cut from the subscriptions currently available in Newsstand.

MacRumors had previously heard Apple was meeting with publishers about the upcoming discontinuation of Newsstand, but was unable to obtain corroborating information.

Publishers supporting Apple’s supposed Flipboard-like app will also keep 100 percent of the revenue from the advertising they each sell within the app. In exchange, Apple will help its partners sell unsold inventory and take a cut of each sale at a rate that one of its publishing partners described as “very favorable.” Although not stated directly, Re/code suggests the Newsstand change could be confirmed today during the company’s annual Worldwide Developers Conference.

How to properly restart the Explorer shell in Windows

Windows provides several secret ways to exit the Explorer shell. They can be useful when you make registry changes that affect Explorer or for shell developers when testing shell extensions. In case you didn’t know them, today I am going to share them with you.

Why you may want to restart Explorer

There are several reasons why you might want to exit the Explorer shell and start it again, such as:

  1. You are trying to uninstall some software with shell extensions, e.g. WinRAR. If you exit Explorer, all of its shell extensions will be unloaded from the shell and can be cleanly deleted by the uninstaller. All files locked by the Explorer.exe process will be released.
  2. You applied some tweak which normally requires you to log off and log back on; in most cases, it is enough to simply restart the shell.

Let’s see how this can be done.

Method 1: Use the secret “Exit Explorer” context menu item of Taskbar or Start Menu

On Windows 8, press and hold the Ctrl and Shift keys on your keyboard and right click on an empty area of the Taskbar. Voila, you just got access to a hidden context menu item: “Exit Explorer”.

Windows 10 has a similar “Exit Explorer” option for the taskbar.

Additionally, it has the same command “Exit Explorer” in the context menu of the Start menu, as Windows 7 used to have:

  1. Open the Start menu in Windows 10.
  2. Press and hold Ctrl + Shift keys and right click the Start menu.
  3. The extra item will appear in the context menu; from there, you can properly exit the Explorer shell.

In Windows 7 and Vista, you can hold down Ctrl+Shift and right click on an empty area of the Start Menu to access “Exit Explorer”.

To start Explorer again, press Ctrl+Shift+Esc to open Task Manager, then use the File -> New task menu item. Type Explorer in the “Create New Task” dialog and press Enter.
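
If you would rather script the restart than use the hidden menu item, one common but blunter approach is to kill and relaunch the Explorer.exe process. The Python sketch below does exactly that; note that taskkill /f is a forced exit, not the graceful shutdown performed by the hidden “Exit Explorer” command.

```python
# Rough sketch: force-restart the Explorer shell from a script.
# Note: taskkill /f is a forced kill, unlike the graceful "Exit Explorer" item.
import subprocess
import time

subprocess.run(["taskkill", "/f", "/im", "explorer.exe"], check=False)
time.sleep(1)                       # give the old process a moment to exit
subprocess.Popen(["explorer.exe"])  # launch a fresh shell instance
```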


Mouse pointer sticks on the edge when moving between multiple monitors

In Windows 8.1 Update 1, if you have multiple monitors, you may have observed a strange behavior of the mouse pointer. When you try to move the mouse pointer across to the other monitor, it sticks at the edge of the screen. If you move the mouse pointer fast, it goes over successfully to the other display. This is not a bug, it’s a feature. Let’s see how to fix it.

This sticking of the mouse cursor on the right edge of monitor 1 and the left edge of monitor 2 (shared edge) is a feature to make the charms bar and scroll bars easier to use. Luckily you can disable it.

WARNING: Using Registry Editor incorrectly can cause serious, system-wide problems that may require you to re-install Windows to correct them. InCHIP IT cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at your own risk.

    1. Open Registry Editor
    2. Go to the following key:
      HKCU\Control Panel\Desktop
    3. Look for a DWORD value called MouseMonitorEscapeSpeed. If that value does not exist, then create it. Change its value data to 1.
    4. Repeat steps #2 and #3 for
      HKCU\Software\Microsoft\Windows\CurrentVersion\ImmersiveShell\EdgeUI
    5. Now restart the Explorer.exe shell or restart Windows.
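
For readers who prefer to script the tweak, the same two values can be set with Python’s built-in winreg module. This is only a sketch of the steps above; back up your registry first, and restart Explorer afterwards as described in step 5.

```python
# Sketch of the registry tweak above using Python's built-in winreg module.
# It sets MouseMonitorEscapeSpeed = 1 under both keys for the current user.
import winreg

SUBKEYS = [
    r"Control Panel\Desktop",
    r"Software\Microsoft\Windows\CurrentVersion\ImmersiveShell\EdgeUI",
]

for subkey in SUBKEYS:
    # CreateKeyEx opens the key, creating it if it does not already exist.
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, subkey, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "MouseMonitorEscapeSpeed", 0,
                          winreg.REG_DWORD, 1)

print("Values set. Restart Explorer.exe or reboot for the change to take effect.")
```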

Introducing Fresco: A new image library for Android

Most of Facebook’s announcements at its F8 developer conference this week were iOS-centric, but today, the company also released three new open source tools for Android developers.

The first is a performance segmentation library called Year Class that is meant to help developers quickly figure out what kind of device a user is running. Thanks to this, a developer can quickly tune an app for an older device by turning off some advanced animations, for example, or enable fancier features on more modern phones. For the most part, the tool uses CPU speed, along with the number of available cores and the amount of RAM, to determine the “year class” of a given device.
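
To make the idea concrete, here is a toy sketch of what bucketing a device into a year class might look like. It is purely illustrative: the thresholds are invented for the example, and this is not Facebook’s Year Class implementation or API (which is a Java library for Android).

```python
# Toy illustration of the "year class" idea: bucket a device into the model
# year of comparable hardware. Thresholds are invented for the example; this
# is NOT Facebook's Year Class library, which is a Java library for Android.
def year_class(num_cores: int, ram_gb: float, clock_ghz: float) -> int:
    if num_cores >= 4 and ram_gb >= 2 and clock_ghz >= 2.0:
        return 2014   # roughly a flagship device of the time
    if num_cores >= 4 or ram_gb >= 1:
        return 2012   # solid mid-range hardware
    if clock_ghz >= 1.0:
        return 2010
    return 2008       # very old or very low-end device

# An app could then gate features on the result, e.g. disable heavy animations.
print(year_class(num_cores=2, ram_gb=0.5, clock_ghz=1.2))  # -> 2010
```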

The second new tool, Network Connection Class, does something similar, but for network connections. It turns out that just knowing a user is on an HSPA connection doesn’t actually tell you all that much about the real network speed. According to Facebook, the speed of an HSPA connection can vary by a factor of five between networks, for example.

Using this new tool, developers can get a better idea of the kind of speeds their users are getting on their networks and tune their apps accordingly. Unlike Year Class, though, this takes a bit more coding to set up, and the tool obviously has to gather some data before you can tune your app to the network speeds users are actually getting.

The third tool, Fresco, is a new image library for Android apps. The idea here is to ensure that apps don’t run out of memory when they load multiple images by being smarter about memory management (those GIFs can get huge, after all) and streaming images when possible.

The system also handles basic functions like displaying placeholders and image caching. You can find the technical details about how exactly this works here.