Internet of Things (IoT)

The “Internet of Things” (IoT) is an increasingly common topic of conversation both in the workplace and outside of it. It’s a concept that has the potential to change not only how we live but also how we work. But what exactly is the “Internet of Things,” and what impact is it going to have on you, if any? There are a lot of complexities around the IoT, but I want to stick to the basics. Plenty of technical and policy-related conversations are being had, but many people are still just trying to grasp the foundation of what the heck these conversations are about.

The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

A thing, in the Internet of Things, can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low — or any other natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network.

IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS), microservices and the internet. The convergence has helped tear down the silo walls between operational technology (OT) and information technology (IT), allowing unstructured machine-generated data to be analyzed for insights that will drive improvements.

Kevin Ashton, cofounder and executive director of the Auto-ID Center at MIT, first mentioned the Internet of Things in a presentation he made to Procter & Gamble in 1999. Here’s how Ashton explains the potential of the Internet of Things:

“Today computers — and, therefore, the internet — are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the internet were first captured and created by human beings by typing, pressing a record button, taking a digital picture or scanning a bar code.

The problem is, people have limited time, attention and accuracy — all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things — using data they gathered without any help from us — we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.”

IPv6’s huge increase in address space is an important factor in the development of the Internet of Things. According to Steve Leibson, who identifies himself as “occasional docent at the Computer History Museum,” the address space expansion means that we could “assign an IPv6 address to every atom on the surface of the earth, and still have enough addresses left to do another 100+ earths.” In other words, humans could easily assign an IP address to every “thing” on the planet. An increase in the number of smart nodes, as well as the amount of upstream data the nodes generate, is expected to raise new concerns about data privacy, data sovereignty and security.
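To get a feel for the scale of that address space, here is a quick back-of-the-envelope comparison of IPv4’s 32-bit space with IPv6’s 128-bit space, sketched in Python:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:,} addresses")
# The IPv6 space is 2**96 times larger than the entire IPv4 space.
print(f"IPv6 space is {ipv6_addresses // ipv4_addresses:,} times larger")
```

Roughly 3.4 × 10^38 addresses: the practical point is that, unlike IPv4, address exhaustion is not a constraint on how many “things” can be given their own address.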

Practical applications of IoT technology can be found in many industries today, including precision agriculture, building management, healthcare, energy and transportation. Electronics engineers and application developers working on products and systems for the Internet of Things can choose from a wide range of wired and wireless connectivity options.

Although the concept wasn’t named until 1999, the Internet of Things has been in development for decades. The first internet appliance, for example, was a Coke machine at Carnegie Mellon University in the early 1980s. The programmers could connect to the machine over the internet, check its status and determine whether a cold drink would be awaiting them, should they decide to make the trip down to the machine.

Dr. John Barrett explains the Internet of Things in his TED talk.


Let’s start with understanding a few things.

Broadband Internet is becoming more widely available, the cost of connecting is decreasing, more devices are being created with Wi-Fi capabilities and built-in sensors, technology costs are going down, and smartphone penetration is skyrocketing. All of these things are creating a “perfect storm” for the IoT.

So What Is The Internet Of Things?

Simply put, this is the concept of basically connecting any device with an on and off switch to the Internet (and/or to each other). This includes everything from cellphones, coffee makers, washing machines, headphones, lamps, wearable devices and almost anything else you can think of.  This also applies to components of machines, for example a jet engine of an airplane or the drill of an oil rig. As I mentioned, if it has an on and off switch then chances are it can be a part of the IoT.  The analyst firm Gartner says that by 2020 there will be over 26 billion connected devices… That’s a lot of connections (some even estimate this number to be much higher, over 100 billion).  The IoT is a giant network of connected “things” (which also includes people).  The relationship will be between people-people, people-things, and things-things.

How Does This Impact You?

The new rule for the future is going to be, “Anything that can be connected, will be connected.” But why on earth would you want so many connected devices talking to each other? There are many examples for what this might look like or what the potential value might be. Say for example you are on your way to a meeting; your car could have access to your calendar and already know the best route to take. If the traffic is heavy your car might send a text to the other party notifying them that you will be late. What if your alarm clock wakes you up at 6 a.m. and then notifies your coffee maker to start brewing coffee for you? What if your office equipment knew when it was running low on supplies and automatically re-ordered more? What if the wearable device you used in the workplace could tell you when and where you were most active and productive and shared that information with other devices that you used while working?

On a broader scale, the IoT can be applied to things like transportation networks: “smart cities” can help us reduce waste and improve efficiency for things such as energy use, helping us understand and improve how we work and live. Take a look at the visual below to see what something like that can look like.

The reality is that the IoT allows for virtually endless opportunities and connections to take place, many of which we can’t even think of or fully understand the impact of today. It’s not hard to see how and why the IoT is such a hot topic today; it certainly opens the door to a lot of opportunities but also to many challenges. Security is a big issue that is oftentimes brought up. With billions of devices being connected together, what can people do to make sure that their information stays secure? Will someone be able to hack into your toaster and thereby get access to your entire network? The IoT also opens up companies all over the world to more security threats. Then we have the issue of privacy and data sharing. This is a hot-button topic even today, so one can only imagine how the conversation and concerns will escalate when we are talking about many billions of devices being connected. Another issue that many companies specifically are going to be faced with is around the massive amounts of data that all of these devices are going to produce. Companies need to figure out a way to store, track, analyze and make sense of the vast amounts of data that will be generated.

So what now?

Conversations about the IoT are taking place all over the world (and have been for several years) as we seek to understand how it will impact our lives. We are also trying to understand the many opportunities and challenges that will arise as more and more devices join the IoT. For now, the best thing we can do is educate ourselves about what the IoT is and the potential impact it may have on how we work and live.

5G Technology

5th generation mobile networks or 5th generation wireless systems, abbreviated 5G, are the proposed next telecommunications standards beyond the current 4G/IMT-Advanced standards.

Qualcomm’s Snapdragon X50 5G modem, an initial chip design announced in October 2016, supports operation in the 28 GHz band, also known as millimetre-wave (mmWave) spectrum. With 800 MHz of bandwidth support, it is designed to support peak download speeds of up to 35.46 gigabits per second.

5G planning aims at higher capacity than current 4G, allowing a higher density of mobile broadband users, and supporting device-to-device, ultra reliable, and massive machine communications.

5G research and development also aims at lower latency than 4G equipment and lower battery consumption, for better implementation of the Internet of things.

There is currently no standard for 5G deployments.

The Next Generation Mobile Networks Alliance defines the following requirements that a 5G standard should fulfill:

  • Data rates of tens of megabits per second for tens of thousands of users
  • Data rates of 100 megabits per second for metropolitan areas
  • 1 gigabit per second simultaneously to many workers on the same office floor
  • Several hundreds of thousands of simultaneous connections for wireless sensors
  • Spectral efficiency significantly enhanced compared to 4G
  • Improved coverage
  • Enhanced signalling efficiency
  • Latency significantly reduced compared to LTE

While 5G isn’t expected until 2020, an increasing number of companies are investing now to prepare for the new mobile wireless standard. We explore 5G, how it works and its impact on future wireless systems.

5G simply stands for fifth generation and refers to the next and newest mobile wireless standard, although a formal standard for 5G is yet to be set.

According to the Next Generation Mobile Networks Alliance’s 5G white paper, 5G connections must be based on ‘user experience, system performance, enhanced services, business models and management & operations’.

And according to the Groupe Speciale Mobile Association (GSMA), to qualify as 5G a connection should meet most of these eight criteria:

  1. 1 to 10 Gbps connections to end points in the field
  2. One millisecond end-to-end round trip delay
  3. 1000x bandwidth per unit area
  4. 10 to 100x number of connected devices
  5. (Perception of) 99.999 percent availability
  6. (Perception of) 100 percent coverage
  7. 90 percent reduction in network energy usage
  8. Up to ten-year battery life for low power, machine-type devices
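The eight criteria above lend themselves to a simple checklist. The sketch below is a hypothetical Python helper: the field names and the reduction of each criterion to a single threshold are simplifications for illustration, not an official GSMA test.

```python
# Hypothetical checker: counts how many of the GSMA's eight 5G criteria
# a measured connection satisfies. Field names and thresholds are
# illustrative simplifications, not an official conformance suite.

def count_5g_criteria(stats):
    checks = [
        stats["downlink_gbps"] >= 1,          # 1 to 10 Gbps to end points
        stats["rtt_ms"] <= 1,                 # 1 ms end-to-end round trip
        stats["bandwidth_factor"] >= 1000,    # 1000x bandwidth per unit area
        stats["device_factor"] >= 10,         # 10 to 100x connected devices
        stats["availability_pct"] >= 99.999,  # five-nines availability
        stats["coverage_pct"] >= 100,         # perceived full coverage
        stats["energy_saving_pct"] >= 90,     # 90% less network energy
        stats["battery_life_years"] >= 10,    # 10-year battery for machines
    ]
    return sum(checks)

measured = {
    "downlink_gbps": 2.5, "rtt_ms": 0.8, "bandwidth_factor": 1200,
    "device_factor": 50, "availability_pct": 99.999, "coverage_pct": 100,
    "energy_saving_pct": 92, "battery_life_years": 10,
}
print(count_5g_criteria(measured), "of 8 criteria met")  # -> 8 of 8 criteria met
```

Since the GSMA says a connection should meet “most” of the criteria, a caller would compare the returned count against whatever cutoff (say, five or more) it considers qualifying.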

Previous generations like 3G were a breakthrough in communications. A 3G device receives a signal from the nearest phone tower and uses it for phone calls, messaging and data.

4G works the same as 3G but with a faster internet connection and a lower latency (the time between cause and effect).

Hubert Da Costa, Vice President, EMEA at Cradlepoint said: “5G Wi-Fi connections are set to be about three times faster than 4G, starting with 450 Mbps in single-stream, 900 Mbps (dual-stream) and 1.3 Gbps (three-stream). So, whilst we are already starting to see a huge growth in IoT and smart devices, 5G’s speed and capacity will enable an even more rapid arrival of this connected future.”

Advantages and disadvantages of 5G

5G will be significantly faster than 4G, allowing for higher productivity across all capable devices with a theoretical download speed of 10,000 Mbps. Plus, with greater bandwidth comes faster download speeds and the ability to run more complex mobile internet apps.
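To make those numbers concrete, here is a quick back-of-the-envelope calculation of how long a 2 GB file would take to download at a representative 4G rate versus the article’s theoretical 5G rate (decimal units assumed throughout):

```python
# Download time for a file at a given link rate.
# 100 Mbps (4G-class) and 10,000 Mbps (theoretical 5G) are the
# article's figures; real-world throughput is lower than either.

def download_seconds(file_gigabytes, rate_mbps):
    megabits = file_gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / rate_mbps

print(f"4G:  {download_seconds(2, 100):.0f} s")     # -> 160 s
print(f"5G:  {download_seconds(2, 10_000):.1f} s")  # -> 1.6 s
```

The hundredfold rate increase translates directly into a hundredfold reduction in transfer time, which is what makes applications like on-demand 4K video plausible on mobile links.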

However, 5G will cost more to implement and while the newest mobile phones will probably have it integrated, other handsets could be deemed out of date.

A reliable wireless internet connection can depend on the number of devices sharing one channel, and adding 5G to the wireless spectrum could put us at risk of overcrowding the frequency range.

“Current 4G mobile standards have the potential to provide 100s of Mbps. 5G offers to take that into multi-gigabits per second, giving rise to the ‘Gigabit Smartphone’ and hopefully a slew of innovative services and applications that truly need the type of connectivity that only 5G can offer,” says Paul Gainham, senior director, SP Marketing EMEA at Juniper Networks.

The future of 5G

As 5G is still in development, it is not yet open for use by anyone. However, lots of companies have started creating 5G products and field testing them.

Notable advancements in 5G technologies have come from Nokia, Qualcomm, Samsung, Ericsson and BT, with growing numbers of companies forming 5G partnerships and pledging money to continue to research into 5G and its application.

Qualcomm and Samsung have focused their 5G efforts on hardware, with Qualcomm creating a 5G modem and Samsung producing a 5G enabled home router.

Both Nokia and Ericsson have created 5G platforms aimed at mobile carriers rather than consumers. Ericsson launched its 5G platform earlier this year, claiming to provide the first 5G radio system; the company began 5G testing in 2015.

Similarly, earlier this year Nokia launched “5G First”, a platform aiming to provide end-to-end 5G support for mobile carriers.

“While the networking industry is working towards making 4G ubiquitous, we also need to future-proof for 5G, which probably won’t see deployment until 2019 or 2020 at the earliest. It will take that long as a completely new eco-system needs to form with the right architectures and agreed standards.

“In line with that, the mobile vendors will need to develop the network infrastructure and end user devices such as new 5G capable handsets. Ultimately, the biggest technological challenge confronting the industry will be spectrum availability,” says Paul Gainham, senior director, SP Marketing EMEA at Juniper Networks.

Cognitive Radio

Cognitive radio (CR) is a form of wireless communication in which a transceiver can intelligently detect which communication channels are in use and which are not, and instantly move into vacant channels while avoiding occupied ones. This optimizes the use of available radio-frequency (RF) spectrum while minimizing interference to other users.

In its most basic form, CR is a hybrid technology involving software defined radio (SDR) as applied to spread spectrum communications. Possible functions of cognitive radio include the ability of a transceiver to determine its geographic location, identify and authorize its user, encrypt or decrypt signals, sense neighboring wireless devices in operation, and adjust output power and modulation characteristics.

There are two main types of cognitive radio, full cognitive radio and spectrum-sensing cognitive radio. Full cognitive radio takes into account all parameters that a wireless node or network can be aware of. Spectrum-sensing cognitive radio is used to detect channels in the radio frequency spectrum.
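A minimal way to picture spectrum sensing is energy detection: measure the average power in each channel and treat anything below a noise threshold as vacant. The sketch below is an illustrative simplification; real detectors work on sampled RF and calibrate the threshold statistically rather than using hand-picked numbers.

```python
# Energy-detection sketch of spectrum sensing: a channel whose mean
# sample power falls below the noise threshold is considered vacant.

def vacant_channels(channel_samples, threshold):
    vacant = []
    for ch, samples in channel_samples.items():
        energy = sum(s * s for s in samples) / len(samples)  # mean power
        if energy < threshold:
            vacant.append(ch)
    return vacant

# Toy data: channel 1 carries a strong signal; 2 and 3 hold only noise.
samples = {
    1: [0.9, -1.1, 1.0, -0.8],
    2: [0.05, -0.02, 0.03, 0.01],
    3: [0.04, 0.02, -0.05, 0.03],
}
print(vacant_channels(samples, threshold=0.1))  # -> [2, 3]
```

A spectrum-sensing cognitive radio would run a loop like this continuously, then retune its transceiver to one of the vacant channels it finds.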

The Federal Communications Commission (FCC) ruled in November 2008 that unused portions of the RF spectrum (known as white spaces) be made available for public use. White space devices must include technologies to prevent interference, such as spectrum sensing and geolocation capabilities.

The idea for CR was developed by Joseph Mitola at the Defense Advanced Research Projects Agency (DARPA) in the United States. Full cognitive radio is sometimes known as “Mitola radio.”

What Are Cognitive Radio Networks?

Cognitive (or smart) radio networks like xG’s xMax system are an innovative approach to wireless engineering in which radios are designed with an unprecedented level of intelligence and agility. This advanced technology enables radio devices to use spectrum (i.e., radio frequencies) in entirely new and sophisticated ways. Cognitive radios have the ability to monitor, sense, and detect the conditions of their operating environment, and dynamically reconfigure their own characteristics to best match those conditions.

Using complex calculations, xMax cognitive radios can identify potential impairments to communications quality, like interference, path loss, shadowing and multipath fading. They can then adjust their transmitting parameters, such as power output, frequency, and modulation to ensure an optimized communications experience for users.

The following graphic shows how a cognitive radio network operates in relation to its environment:



Cognitive vs. Conventional

Conventional, or “dumb” radios, have been designed with the assumption that they were operating in a spectrum band that was free of interference. As a result, there was no requirement to endow these radios with the ability to dynamically change parameters, channels or spectrum bands in response to interference. Not surprisingly, these radios required pristine, dedicated (i.e., licensed) spectrum to operate.

By contrast, xMax cognitive radios have been engineered from the ground up to function in challenging conditions. Unlike their traditional counterparts, they can view their environment in great detail to identify spectrum that is not being used, and quickly tune to that frequency to transmit and/or receive signals. They also have the ability to instantly find other spectrum if interference is detected on the frequencies being used. In the case of xMax, it samples, detects and determines if interference has reached unacceptable levels up to 33 times a second.

The following image illustrates how xMax cognitive radios operate differently from conventional radios. It shows screen captures of spectrum analyzer readings taken from an xMax network tower in Ft. Lauderdale, FL. The frequencies being measured are in the unlicensed 900 MHz ISM band. Because this spectrum is unlicensed (i.e., free of charge for anyone to use), it is used by hundreds, if not thousands, of radios in the local area for applications like cordless phones, baby monitors, commercial video security systems, etc.

The figure at the left shows how a conventional radio would view this—as an environment having an unacceptable level of interference for communicating. The figure at the right shows what this same interference looks like to xMax. xMax is able to divide these frequencies into very small time segments (33 milliseconds) and find usable gaps where it can transmit its short and highly efficient signals—at moments when the spectrum is quiet.


xMax divides the 900 MHz spectrum block shown into 18 channels—giving it 18 opportunities (windows) every 33 milliseconds to find available spectrum.

In short, the xMax cognitive radio network sees windows of opportunity where other radios see walls of interference.

To reduce “thrashing” and unnecessary channel switching caused by temporary, very short-lived interference or by degraded network conditions that have no noticeable impact on performance or quality, actual channel and handover decisions are made by trending multiple samples and measurements. The system switches from its current channel only when extreme levels of interference exceed its built-in interference mitigation capabilities. This enables xMax to use frequencies and find available bandwidth where other radios see only static, while its real-world-tuned algorithms reduce signaling overhead and optimize throughput and quality.
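The trend-based decision described above can be sketched as a moving average with a switching threshold: the radio reacts to sustained interference, not to a single spike. The window size and threshold below are illustrative choices, not xMax’s actual parameters.

```python
# Sketch of trend-based channel switching: average the last N
# interference readings and recommend a switch only when the trend
# stays above a tolerance threshold (avoids "thrashing" on spikes).

from collections import deque

class ChannelMonitor:
    def __init__(self, window=5, threshold=0.8):
        self.samples = deque(maxlen=window)  # rolling window of readings
        self.threshold = threshold

    def report(self, interference_level):
        """Record one reading; return True if the radio should switch."""
        self.samples.append(interference_level)
        full = len(self.samples) == self.samples.maxlen
        trend = sum(self.samples) / len(self.samples)
        return full and trend > self.threshold

monitor = ChannelMonitor()
readings = [0.2, 0.9, 0.3, 0.2, 0.1,    # one spike: no switch
            0.9, 0.95, 0.9, 0.92, 0.9]  # sustained interference: switch
decisions = [monitor.report(r) for r in readings]
print(decisions[-1])  # -> True only once interference is sustained
```

The single 0.9 spike early in the trace never triggers a switch because the windowed average stays low; only the sustained run at the end pushes the trend past the threshold.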

Cognitive Radios Improve Spectrum Efficiency

The ability of xMax cognitive radios to make real-time autonomous decisions and dynamically change frequencies (referred to as dynamic spectrum access, or DSA) allows them to intelligently share spectrum and extract more bandwidth—which improves overall spectrum efficiency. It achieves this by “opportunistic use” of shared frequencies like unlicensed spectrum.

xMax cognitive radio technology was designed to be “frequency agnostic.” That is, its cognitive “Identify and Utilize” spectrum sensing technology can be used to power radios in any frequency band. This is beneficial since the FCC and wireless regulatory bodies around the world are in the process of opening up new spectrum, as well as reclassifying existing spectrum, to be made available for opportunistic use by cognitive radios.

This would allow new market entrants, utilities, public safety, enterprise and even existing wireless operators to offer new services, additional bandwidth and higher capacity without requiring these entities to purchase expensive and scarce wireless spectrum.

Taking Cognitive Radios Further: Interference Mitigation

Most of the research in the cognitive radio field to date has been limited to Dynamic Spectrum Access within the radio device. xG Technology has expanded the application of cognitive techniques beyond DSA in every radio used in the xMax system. xG is leveraging cognitive technology in several other aspects of the radio’s operation and across the entire xMax wireless network.

One of the breakthroughs xG has made that takes its xMax solutions beyond competing cognitive radios is the addition of sophisticated, patent-pending interference mitigation. These interference mitigation techniques allow xMax cognitive radios to increase their dwell time on a channel, even in the presence of interference that would cause traditional radios to fail. This increases the total spectrum bandwidth available to the xMax system compared to other radio systems, and improves the reliability of the xMax network in harsh RF conditions.

xMax cognitive radio networks are also incorporating MIMO antennas and advanced signal processing algorithms to withstand much higher levels of noise, jamming, and general interference than conventional radios and competing cognitive radio solutions.

Remembering Microsoft’s biggest blunder

Software giant Microsoft has finally said goodbye to the 10-year-old Windows Vista operating system, which debuted to severe criticism.

From April 11, 2017, Windows Vista customers will no longer receive security updates or online content updates from Microsoft.

If you continue to use it, your computer will still work but it might become more vulnerable to security risks and viruses.

According to Microsoft, “Internet Explorer 9 is no longer supported, so if your Windows Vista PC is connected to the Internet and you use Internet Explorer 9 to surf the web, you might be exposing your PC to additional threats. Also, as more software and hardware manufacturers continue optimising for more recent versions of Windows, you can expect more apps and devices to stop working with Windows Vista.”

Only about 0.78 per cent of Windows users still run Vista. Microsoft wants its users to purchase the latest OS, Windows 10.

Last year, Firefox indicated it would support the OS until next September, while Google said Gmail would not work with it later in 2017.

Those using Windows 7 can breathe a sigh of relief, as Microsoft won’t end security updates for Windows 7 PCs until January 14, 2020. Support for Windows 8 ends on January 9, 2018, while extended support is set to expire in 2023.

Meanwhile, Microsoft rolled out Windows 10 Creators Update to Windows 10 customers around the world for free.

Here’s a fact sheet from Microsoft:

As usual, the Twitterati took their turns to give the OS a send-off with their ‘tributes’.

RIP Windows Vista

Unsupported and forgotten

I don’t know why everyone is freaking out so much, but I run Windows Vista on my MacBook and it’s great!

R.I.P Windows Vista, hello Windows 10 Creators Update.

In news that affects an almost 0 number of people:

Windows Vista extended support ends tomorrow

Microsoft retired Vista today. It is the first time retirement has occurred after public execution.


Microsoft rolls out free Windows 10 Creators Update

Microsoft on Wednesday rolled out Windows 10 Creators Update to Windows 10 customers around the world for free.

To get the update, users can enable automatic updates on their Windows 10 PC, and the Creators Update will be delivered through a phased rollout.

Advanced users can initiate the update manually through the “Update Assistant”.

“Microsoft’s mission has always been to lead new innovations that let people create their own path and bring their dreams and aspirations to life,” said Vineet Durani, Country Head, Microsoft India, in a statement.

Windows 10 Creators Update empowers users with experiences on 3D, mixed reality, 4K gaming and enhanced security features.

With this update, users can now use a service to help monitor their security through the “Windows Defender Security Centre”, which offers a single dashboard so people can control their security options from one place.

Adobe Captivate

Global software giant Adobe on Wednesday unveiled Adobe Captivate, the latest version of its eLearning authoring tool, and Adobe Captivate Prime, a learning management system (LMS), to bolster personalised learning experiences.

The two products are used in conjunction as an end-to-end solution for specialists who want to deliver learning experiences that are personalised and can be delivered on any device.

“Adobe Captivate and Captivate Prime enable the creation and delivery of learning experiences that are personalised and available to employees on the device of their choice,” said Adil Munshi, Vice President (Print and Publishing Business) Adobe, in a statement.

The new version of Adobe Captivate Prime allows learning and development teams to deliver personalised learning experiences across multiple devices and manage both online and offline training more efficiently. Adobe Captivate allows eLearning designers to automatically transform desktop courses into mobile-optimised learning in just a few clicks. Users can take advantage of more than 75,000 free eLearning assets and create courses across devices.

Large Asteroid to hurtle past Earth on 19th April

It is a close pass for an object this size

A relatively large near-Earth asteroid will fly safely past our planet on April 19 at a distance of about 1.8 million kilometres — over four times the distance from Earth to the Moon, NASA said. Although there is no possibility for the asteroid to collide with Earth, this will be a very close approach for an asteroid of this size.
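A quick sanity check on the “over four times” figure, using the average Earth-Moon distance of about 384,400 km:

```python
# Compare the asteroid's closest approach with the Earth-Moon distance.
flyby_km = 1_800_000     # closest approach (the article's figure)
earth_moon_km = 384_400  # average Earth-Moon distance

ratio = flyby_km / earth_moon_km
print(f"{ratio:.1f} lunar distances")  # -> 4.7 lunar distances
```

So the pass is a little under five lunar distances out, comfortably consistent with the article’s “over four times the distance from Earth to the Moon.”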

The asteroid, known as 2014 JO25, was discovered in May 2014 by astronomers at the Catalina Sky Survey in Arizona, US.

Contemporary measurements by NASA’s NEOWISE mission indicate that the asteroid is roughly 650 meters in size, and that its surface is about twice as reflective as that of the Moon.

At this time very little else is known about the object’s physical properties, even though its trajectory is well known. The asteroid will approach Earth from the direction of the Sun and will become visible in the night sky after April 19. It is predicted to brighten to about magnitude 11, when it could be visible in small optical telescopes for one or two nights before it fades as the distance from Earth rapidly increases, NASA said.

Small asteroids pass within this distance of Earth several times each week, but the upcoming close approach is the closest by any known asteroid of this size or larger since Toutatis, a five-kilometre asteroid that approached within about four lunar distances in 2004.

The next known encounter of an asteroid of comparable size will occur in 2027 when the 800-metre-wide asteroid 1999 AN10 will fly by at one lunar distance, about 380,000 kilometres.

The April 19 encounter provides an outstanding opportunity to study this asteroid, and astronomers plan to observe it with telescopes around the world to learn as much about it as possible.

The encounter on April 19 is the closest this asteroid has come to Earth for at least the last 400 years and will be its closest approach for at least the next 500 years.

Also on April 19, the comet PanSTARRS (C/2015 ER61) will make its closest approach to Earth, at a very safe distance of 175 million kilometres, NASA said.

A faint fuzzball in the sky was discovered in 2015 by the Pan-STARRS NEO survey team using a telescope on the summit of Haleakala, Hawaii.

The comet has since brightened considerably due to a recent outburst and is now visible in the dawn sky with binoculars or a small telescope.

Why do newly bought phones have to be charged for seven to eight hours at a stretch?

Nowadays, smartphones come with lithium-ion (Li-ion) batteries that ship with a partial charge and can be fully charged within about two hours.

However, manufacturers still insist on charging them for eight hours before the first use. This is probably because the charge level indicated when a new gadget is first switched on may not reflect the true level of charge in the battery. If the actual charge is very low, the new phone may switch off midway while you are installing apps. The manufacturer does not want the customer to have a bad first impression!

This instruction has its origin in the era when nickel-cadmium (Ni-Cd) batteries, with nickel oxyhydroxide (NiOOH) as the positive electrode and cadmium as the negative electrode, were common. These batteries are known to have a memory effect: the battery seems to memorise the discharge voltage and the depth of discharge of the previous cycle. This effect leads to a progressive loss of practical cell capacity at a fixed cutoff voltage, and hence to large errors in estimating the cell’s state of charge.

Google unveils ‘YouTube Go’

Google on Tuesday rolled out the beta version of its new ‘YouTube Go’ application for India, first unveiled in September last year. The application promises to give a better experience of watching videos on a slower network.

“Today, after months of expanded testing and refinement, we’re happy to announce that we’re making the beta version of YouTube Go available for download on the Google Play Store in India,” the firm said in a statement.

It added that the application had been designed to be offline first and to improve the experience of watching videos on a slower network. Users can also keep a tab on data used for streaming or saving videos.

The new application will also enable quick sharing of videos with friends nearby. Other features include showing trending and popular videos in the user’s area on the home screen and providing previews of videos.

‘Grassoline’ may power future flights

In the quest for more sustainable energy sources, scientists have developed ‘grassoline’ — a biofuel derived from grass that could one day power aircraft. Researchers investigated methods to break down and treat grass until it can be used as a fuel.

“Due to its vast abundance, grass is the perfect source of energy,” said Way Cern Khor from Ghent University in Belgium.

“Right now the amount of biofuel that can be made from grass is still limited to a few drops. The current process is very expensive, and engines would need to be adapted to this new kind of fuel,” the researchers said.

“If we can keep working on optimising this process in cooperation with the business world, we can bring down the price. And maybe in a few years we can all fly on grass!” Khor said.