Introduction
As technology has evolved massively in the past 50 years, one concept has come to define it above all others: Inter-Connectivity.
The starting point was the first wide-area packet-switched network, ARPANET. Its key innovation was packet switching itself. This represented a departure from circuit switching, which relied on end-to-end dedicated connections. Instead, information was split into packets, the essential building blocks of the communication. These packets would not need a fixed, predetermined route, traveling independently to the destination, where the receiver would collect all the packets and rebuild the transmitter’s information. This made communication far more flexible, as the lack of dedicated lines meant that each peer could communicate to the other simultaneously. It also paved the way to the formation of the TCP/IP protocol suite, which gave packet-switched networking a usable, concrete form of reliable communication. More so, ARPANET would eventually interconnect with other networks, leading to the genesis of the modern internet.
The next big thing on the path of network evolution came when the ‘Internet of Things Global Standards Initiative’ defined the ‘Internet of Things (IoT)’, in Recommendation ITU-T Y.2060 (06/2012), as a “global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies.” This definition marked a shift in the way networking is designed, by deeming every object and tool in our possession capable of interconnectedness and intelligent features. Under this paradigm, our terminal into the digital realm is no longer restricted to a personal computer, but may just as well be a smartphone, a wearable or an everyday appliance, heralding a future of an increasingly ambiguous border between the real and the virtual.
It was this idea that drove me to a project in the field of healthcare. Information, especially about one’s own well-being, should be easily accessible anywhere, at any time, so as to work in the interest of health in the most efficient way possible. To that end, the project connects patients and their doctors through a shared digital platform. This provides an intuitive platform to both parties, allowing them to act as fast and as effectively as possible in order to offer the best medical care possible.
The result is a platform that is readily available and easy to use. Medical measurements are gathered on the client side and relayed through the patient’s smart devices, to be further passed on to a central server. This hub will support a web portal from which both doctors and patients will be able to access their health history and medical information. This allows the placement of the hub anywhere in the world, as the users will have digital access to it regardless of its position, which also adds to the redundancy of such a solution. At the same time, since data will be stored in a central cloud location, patients and doctors will not be locked to a single device; rather, any kind of computer that can open a browser can act as the peer, providing full freedom while also giving full access to information.
Chapter One. State of the Art
The state of today’s healthcare information systems is, to put it bluntly, primitive and dangerously obsolete, and this is fairly public knowledge. Many institutions still rely on paper records to archive and distribute information, making the entire process not only slow and highly prone to errors, but also very susceptible to environmental factors such as fires and other calamities.
An entirely digital solution would allow the information to be safely backed up in multiple geographical sites, offering not only peace of mind should something happen to the original site, but also the possibility of expanding the system beyond backup, to synchronize and offer immediate access to vastly distant locations, without sacrificing time and power.
The first steps towards this more connected, interoperable, seamlessly digital future have already been taken, as a 2013 survey from “Health Affairs” explains: between 2009 and 2013, more and more organizations started adopting an EHR (Electronic Health Record) system. To be precise, the adoption rate had reached around 78% of medical specialists and healthcare facilities. The same study, however, found that of these, only 14% also shared their data with entities outside their organization. This factor is just as important, because once shared, that information can be cross-correlated, stacked with similar data from other sources, and fed into machine learning systems or any number of other intelligent algorithms in order to research and produce better, more valuable results. Furthermore, only 30% of doctors kept in touch with their patients through digital messaging, and only 24% also offered their patients a means to obtain or view their records on a secure, easily available, online platform. This again is part of a truly inter-connected system, which should not only collect medical data, but also share it, both with organizations capable of putting it to use to research medical advances, and with the patients themselves, to offer them an easy way to manage their own data, without the need to spend valuable time (both theirs and their doctors’) waiting in lines or traveling across the city.
In effect, while more and more medical entities drive forward towards this digital infrastructure, progress is slow due to various factors, chief amongst which is money, though far from the only one. A very important issue in this endeavor is the entire legal apparatus that must first and foremost protect the patients’ right to privacy and personal information. How will the patients’ data be shared securely and privately, without risk of theft? How will this distribution of information work while maintaining the customer’s right to complete control over all information that pertains to them? This is an especially difficult matter to entertain, since the rise of cyber-attacks and data theft has forced governments to strengthen policies regarding private information crossing the internet, as well as to extend and enforce the rights of citizens to decide how their data is handled. This means that such an inter-connected medical platform would need a highly sturdy and scalable information and network security infrastructure, further driving up the costs of such a project massively.
Another important issue is the socioeconomic situation of a population. At the 2014 Stanford Medicine X event, California was given as an example of a society where only 43% of people served by health organizations had access to smartphones. Since a fully digital medical platform would require the real-time exchange of information with patients, patients without access to an internet-enabled smart device would severely limit the effectiveness of the system. As such, a difficult question arises: is the world even ready to fully take advantage of an inter-connected platform, when a significant part of the population doesn’t even have the means to participate in this digital exchange of data?
Chapter Two. Platform Design
Hardware
This chapter refers to the collection of physical devices on which the networked system resides. It is primarily composed of a central server hub, in this case a PC, and a client, represented by an Android smartphone and its connected smartwatch.
Android Smartphones
Dating back to 2008, the first smartphone to bear the Android system was HTC’s Dream (initially branded as the T-Mobile G1). It represented the first physical handheld terminal operating on completely open-sourced software. The OS being still nascent, it didn’t rely as much on its touchscreen as modern phones do, having five navigation buttons on the bottom of the phone as well as a physical QWERTY keyboard under the screen.
An important aspect of this phone was its SoC, the Qualcomm MSM7201A, which helped raise the company’s market coverage as the Android OS grew. While Qualcomm had previously released a number of mobile SoCs for various phones and smartphone attempts, none would compare to the success the company would have after being widely adopted by Android smartphone manufacturers. The MSM7201A shipped with 192 MB of RAM and 256 MB of expandable internal storage. In terms of network access, it supported quad-band GSM (850/900/1800/1900 MHz) with GPRS/EDGE, and dual-band UMTS (1700/2100 MHz) with HSDPA/HSUPA.
In terms of CPU, the MSM7201A had multiple cores, but they lacked the multiprocessing capabilities of current mobile SoCs; instead, a single core (in this case the ARM1136J-S) was delegated to the OS and running applications. The others were dedicated processor cores, generally assigned to the following tasks:
A decoding/encoding DSP for media (specifically QDSP5000)
An ARM9 baseband processor
A decoding/encoding DSP for telephony (specifically the QDSP4000)
The SoC also contained various other units and controllers, such as dedicated 2D and 3D graphics hardware, several interfaces, and an AXI (Advanced eXtensible Interface) memory controller, used for its – at the time – high performance and clock frequency.
Although later versions have come to support the x86/x64 and even MIPS (Microprocessor without Interlocked Pipeline Stages) architectures, to this day Android’s main hardware platform has widely remained ARM (ARMv7 and ARMv8-A, to be precise). Initially running on 32-bit systems, the recent Android 5.0 “Lollipop” update added support for 64-bit SoCs. As for specifics, the official requirements state that a current device running “Lollipop” should have a minimum of 512 MB of RAM, growing to 1.8 GB if the device uses a high-density screen. Previous Android versions such as 4.4 could support smartphones with as little as 340 MB, but system performance would be far from ideal. Graphics acceleration is also required, in the form of an OpenGL ES 2.0 compatible GPU (graphics processing unit).
In addition to the actual hardware necessary to run the OS, the physical platform originally contained many other components, such as orientation sensors, GPS, and tools to measure acceleration, pressure and proximity to obstacles, not to mention the touchscreen itself. Some of these were even required: since the entire system started as a mobile phone OS, components like the microphone, camera and touchscreen were mandatory to run Android. These requirements have been loosened in recent years, as the OS has started to transition far beyond smartphones.
It is important to bear in mind that an Android system, be it a mobile phone or a TV set-top box, is essentially a personal computer. This was, however, not always the intention, and the main issue at its inception – one that persists today – is that of fitting hardware powerful enough to compute complex tasks onto a physical platform small enough to be handheld. As such, a modular approach as in desktop PCs has always been deemed near impossible, in terms of both spatial considerations and performance. A desktop PC has a dedicated Central Processing Unit mounted on a motherboard, which serves as a backbone for the storage medium and RAM, as well as an optional dedicated Graphics Processing Unit. This demands a large amount of physical space. Since modularity was no longer a driving concern, the focus could be shifted to optimizing and integrating dedicated hardware components into a single, inseparable physical platform.
Thus was born the idea of the System on a Chip (SoC): an integrated circuit that converges all the needed hardware elements of a system onto a single chip of reduced size. Below is a rough diagram of a typical SoC and its main components.
Fig. 1 – Rough Diagram of SoC Components
Generally, though, a SoC contains on its platform the following elements:
The CPU (single or multi-core processor) is the most important part of the chip. It is a computing component with one or multiple independent processing units (usually called cores). Here is where the program instructions are read and executed. Most commonly, the processors found in SoCs are based on the ARM architecture, whose key features are high efficiency and low power, ideal for devices where available energy is a major constraint.
The GPU (graphics processing unit), or media processor, the second most important component, deals with the actual interface between the SoC and the user: the screen. It renders the output of the code handled by the processor, presenting the imagery required for the user to interact with the device. Unlike the CPU, its architecture is based on parallelism and multiprocessing, containing many more cores than the main processor, in the range of hundreds. This fits the GPU’s task, which is not to read and execute code sequentially, but to render the numerous objects and visual data in a scene and project them onto the screen’s pixels.
The Vector (Co)Processor, or array processor, is a processing unit whose instruction set operates on whole arrays of data (vectors), as opposed to the single data items (scalars) a regular processor handles. Offloading suitable processes to this element greatly improves performance on certain workloads.
The Bus interconnects all the elements of the system. It is characterized as either proprietary or industry-standard. To further increase efficiency and reduce latency, data between interfaces and memory can be routed directly by DMA controllers, completely bypassing the processors.
The Memory serves the same purpose as it does on any other platform: storing the data a program needs as it runs. Since SoCs are low-power, their processes were never especially memory-intensive, so RAM capacity was never one of the bigger issues. The introduction of 64-bit processors, allowing more than the roughly 3 GB of RAM usable under 32-bit addressing, was not such a momentous event as it was on the desktop/server side.
Northbridge & Southbridge – these are chipsets which handle various communication and I/O functions. The Northbridge is tasked with the communication between the central processor and all the other components. The Southbridge, while not always present, is responsible for some of the I/O connections.
The Cellular Radios are also central to a smartphone SoC, as they contain the modems mobile operators need to ensure the device’s phone connectivity. They are tasked not only with traditional voice and SMS (Short Message Service) communication, but also with wireless internet access through various standards. This technology started with GSM (Global System for Mobile communications); currently the most common standards are 4G/LTE.
Other Radios – Common to smartphones and, to a lesser extent, any SoC-based device, are any number of other dedicated radios, ranging from GPS/GLONASS (GLONASS being Russia’s satellite navigation counterpart to GPS), to Bluetooth, Wi-Fi, etc.
Below are the block diagrams of two common mobile SoCs: the Tegra 3 (an Nvidia-manufactured SoC, commonly found in high-end tablets) and the Snapdragon S4 (a Qualcomm SoC, operating mostly in smartphones).
Fig. 2 – Block Diagram of Nvidia Tegra 3 SoC (Left) and Qualcomm Snapdragon S4 MSM8960 SoC (Right)
What is worth noting is that Nvidia’s platform contains only the processing components of the SoC, that is, the CPU, GPU, memory controller, video-audio encoders and decoders, and output streams. Qualcomm’s SoC, however, managed to integrate various other elements as well, such as GPS, radio data modems, wireless antennas and modules, etc. The main reason for this is Qualcomm’s advanced 28 nm manufacturing process, while Nvidia still used 40 nm lithography at the time. Since the latter’s SoC was still among its first forays into mobile processing, it could not yet compete in power efficiency. Instead, Nvidia used its experience as one of the world’s largest manufacturers of desktop and server Graphics Processing Units to design a mobile SoC that is still decently efficient, but also built from the ground up with graphical performance in mind. As such, the Tegra line of SoCs has mostly been used in devices where battery life can be compromised in favor of increased performance – Android and Windows tablets.
As mentioned before, the two main architectures used in SoC manufacturing have remained largely unchanged – ARM and x86 – with the former dominating the mobile SoC market. ARM did not, however, start as a mobile architecture. It was initially designed to be used in a PC, specifically the Acorn Archimedes. When this device was released, its ARM2 processor – capable of addressing up to 4 MB of RAM and paired with 20 MB of hard drive storage – contained a comparatively low number of transistors, making the design impressively simple: around 30,000, while the competing processor from Motorola, the 68000, sported more than double that amount, at roughly 68,000 transistors. Besides the low transistor count, the ARM processor was also designed around a different, simpler instruction set, which would evolve to define the RISC (Reduced Instruction Set Computer) architecture. All of this made ARM’s processor so efficient that it was able to outperform Intel’s 80286 while using less power.
The other architecture, x86, had its name coined from the family of processors Intel has launched since 1978, all descended from the 16-bit 8086 CPU. Presently, the x86 architecture is most commonly 32-bit and is used in most personal computers. The main drawback of the architecture is that, because of its focus on desktop PCs, extreme breakthroughs in energy efficiency were never achieved – performance was always the sole desired element. As such, it was never able to outperform the ARM architecture in terms of performance/cost ratio, with the closest attempt being the Atom platform, which, although impressively efficient, lacks the power to compete with ARM in raw performance.
Mobile SoCs themselves are broadly characterized by two things in particular: the CPU (Central Processing Unit) and the GPU (Graphics Processing Unit).
The Graphics Processing Unit
In the present day, a crucial component of any mobile SoC is the graphics processing unit, the GPU. At the historical onset of SoCs, the main task of the graphics hardware was rendering 3D objects and scenes and projecting them onto the screen’s pixels. Presently, however, GPUs have grown more and more central to the entire system. On one hand, their main job, screen rendering, has moved beyond applications: on Android systems the GPU renders the entire user interface. On the other, the GPU has intrinsically much more parallelism than the CPU, being designed from the ground up with many more cores in mind. While not as universally powerful as their desktop counterparts, modern mobile GPUs sport as many as 256 cores (in the case of Nvidia’s Tegra X1 chip). This native parallelism allows the GPU to take over certain thread-intensive tasks from the CPU, giving the central processor the option to distribute the load better.
Among the first lines of mobile GPUs was the Mali, produced by ARM. As opposed to other mobile graphics units, this chip did not have a built-in display controller to drive the screen. Instead, it contained a dedicated 3D engine to process the graphical rendering of 3D objects, scenes and effects (and, in the case of Android, the UI) and offload the result into memory. While still maintaining a few models found in certain low to mid-range mobile devices, the Mali has mostly been overshadowed by its competitors. It has, however, undergone a resurgence in recent years thanks to Samsung, who paired a Mali GPU with their impressive proprietary Exynos SoC in their flagship smartphones. This specific GPU, the Mali-T880MP12, is the line’s singular presence on high-end devices, but it has been successful enough to compete against Qualcomm’s market-leading SoC.
Fig. 3 – GFXBench 3.1 Manhattan Test of various devices, showing the Exynos’s Mali GPU keeping up with market leader Qualcomm’s Snapdragon 820
Another early entrant in the mobile GPU market was Imagination Technologies’ PowerVR. While it had a strong lead at the beginning, it has since dropped to mostly certain low-end devices, focusing on low-powered media chips. Unlike Mali, however, PowerVR GPUs aren’t actually manufactured by Imagination Technologies themselves. Instead, their designs and patents are licensed to other companies like Intel, Samsung, Apple, etc. Currently, the only high-performance PowerVR chip is manufactured by Apple and used in the SoCs of their iPhone devices.
Perhaps the most well-known mobile GPU remains industry leader Qualcomm’s Adreno. Originally branded as Imageon, it was developed by ATI as far back as 2002 to be used in handheld and mobile devices. After AMD’s acquisition of ATI in late 2006, it was rebranded as AMD Imageon, only to be officially discontinued in 2008 due to AMD’s internal restructuring and company problems. It was then finally bought by Qualcomm in late 2008 at the price of $64 million. Since AMD still retained the preceding branding, Qualcomm changed the chipset’s name to Adreno. The GPU has since been the official integrated GPU in Qualcomm’s series of mobile SoCs. In recent years the Adreno line has seen massive advancements in hardware and software technology, with Qualcomm aiming to bridge the gap between desktop and mobile graphics, while at the same time striving to transition the GPU from a purely graphical processor to a general-purpose processor capable of heavy parallelism. Key imaging and multimedia tasks such as 4K recording, high-framerate UI and screen capture, and the decoding and encoding of high-bandwidth data streams are all capable of using this type of processing.
Fig. 4 – History and Evolution of Qualcomm’s Adreno GPU
Adreno’s 400 series brought even more improvements to the chip, especially addressing this API transition. It was the first Qualcomm GPU to support the DirectX 11.2 and OpenGL ES 3.1 graphics APIs, while also employing Google’s new AEP (Android Extension Pack). This extension enables support for ASTC texture compression, compute and geometry shaders, and hardware tessellation, all of which were until then PC-specific techniques.
Fig. 5 – Adreno 400 series technical improvements over its predecessors
Most notable is full OpenCL 1.2 profile support, alongside Microsoft’s DirectCompute API, which serves to further advance the GPU’s role as a general-purpose processor. In terms of APIs, the 400 series also improved RenderScript acceleration over the preceding 300 line.
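As a concrete illustration of this general-purpose direction, the sketch below (a hypothetical example, not vendor code) uses Android’s RenderScript API – whose acceleration the 400 series improves – to offload a parallel image blur from Java code onto whatever processor the runtime deems best suited, GPU included:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import android.renderscript.ScriptIntrinsicBlur;

    public final class GpuBlur {
        // Blurs a bitmap via RenderScript; the runtime may dispatch the work
        // to the GPU or DSP when the device's drivers support it.
        public static Bitmap blur(Context context, Bitmap source, float radius) {
            RenderScript rs = RenderScript.create(context);
            Bitmap output = Bitmap.createBitmap(
                    source.getWidth(), source.getHeight(), source.getConfig());
            Allocation in = Allocation.createFromBitmap(rs, source);
            Allocation out = Allocation.createFromBitmap(rs, output);
            ScriptIntrinsicBlur blurScript =
                    ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
            blurScript.setRadius(radius); // valid range: (0, 25]
            blurScript.setInput(in);
            blurScript.forEach(out);      // parallel kernel launch
            out.copyTo(output);           // copy result back to the bitmap
            rs.destroy();
            return output;
        }
    }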
Another area to be improved was texturing performance, which received support for higher anisotropic filtering levels while maintaining a low energy footprint. Support for ASTC (Adaptive Scalable Texture Compression) also improves the level of detail and overall texture quality while keeping the same performance levels. To aid this increase in texture capacity, Qualcomm has increased the texture cache sizes as well as the general-purpose L2 cache.
Lastly, the ROPs (Render Output Processors) received performance enhancements and polish. These elements are tasked with the final calculation of scene and object pixels and with projecting them onto the screen. The Z/stencil buffer is used to manage image depth coordinates (as an object is rendered, the buffer stores the depth of each generated pixel, its z coordinate). This area has been improved, allowing for faster depth rejection, which lowers the number of pixels the GPU must compute to render and draw any particular scene, in turn removing unused, hidden regions from the pipeline.
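To make the mechanism concrete, the following minimal software model (an illustrative sketch, not how the hardware is actually built) shows the per-fragment test that depth-rejection hardware performs:

    // Hypothetical software model of the ROPs' per-fragment depth test:
    // fragments behind the stored depth value are rejected before any
    // shading cost is paid for them.
    final class DepthBuffer {
        private final float[][] z; // depth per pixel; 1.0f == far plane

        DepthBuffer(int width, int height) {
            z = new float[height][width];
            for (float[] row : z) java.util.Arrays.fill(row, 1.0f);
        }

        // Returns true if the fragment is visible and should be shaded.
        boolean testAndWrite(int x, int y, float fragmentZ) {
            if (fragmentZ >= z[y][x]) return false; // hidden: early rejection
            z[y][x] = fragmentZ;                    // visible: record new depth
            return true;
        }
    }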
Qualcomm’s upcoming 500 series is expected to increase Adreno’s power even more. Barely released and featured only on select smartphones, the Adreno 530 pushes graphical performance much further than its predecessor. As seen below, the new GPU doubles the 430’s performance. Even more interesting is the fact that the integrated SoC GPU is starting to match and even outperform desktop-grade integrated GPUs, like Intel’s HD 520.
Fig. 6 – GFXBench 3.1 Manhattan (1080p Offscreen) Comparison Test (Left) and GFXBench 3.0 Manhattan OGL Offscreen Comparison Test (Right)
The Adreno 530 is clocked at 624 MHz and, in terms of APIs, supports Direct3D 11.1, OpenGL ES 3.1 with AEP, and OpenCL 2.0. It also boasts the introduction of the Vulkan API to mobile devices, allowing the rendering and execution of desktop-grade graphics programs on a Snapdragon 820 equipped device.
The last contender in the Android mobile GPU market is Nvidia’s famous Tegra X1 “superchip”. The Tegra line was first announced and released in 2008, with its first chips mainly aimed at smartbooks, smart media devices and general mobile internet devices, and only tentatively at smartphones (they lacked their competitors’ power efficiency). Eventually, in 2009, Nvidia started supporting Android as an OS to run on Tegra, and unveiled the first Android-specific Tegra, the 250, in 2010. Its next iteration, the Tegra 3, ventured beyond mobile devices and has been used in Audi’s in-vehicle information and digital instrument displays across its entire line of cars since 2013. The first chip to establish Nvidia as an Android powerhouse was the Tegra K1, which powered both Nvidia’s proprietary Android devices and the Nexus 9 Android tablet, all of which exhibited massive performance gains over competing products.
The current chip, the Tegra X1, is set to overthrow the competing Android graphics SoCs once again. First and foremost, the “superchip” is the first Android GPU to provide a staggering 256 computational cores (branded CUDA cores), as well as using the company’s proprietary desktop architecture, Maxwell. Compared to its predecessor, the X1 features support for OpenGL 4.5, DirectX 12, AEP and CUDA 6.0 (Nvidia’s proprietary development environment). Where the chip impresses most is its processing power, being capable of over 1000 GFLOPS on 16-bit workloads and more than 500 GFLOPS in fp32 (32-bit floating point) operations. Not only does the X1 deliver twice the raw performance and efficiency of the K1, these figures also mark it as the world’s first TeraFLOPS mobile processor.
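The TeraFLOPS figure follows from simple arithmetic, assuming the X1’s GPU clock of roughly 1 GHz and one fused multiply-add (two floating-point operations) per core per cycle:

    256 cores × 2 FLOPs/cycle × 1 GHz ≈ 512 GFLOPS (fp32)

Since each lane can pack two 16-bit operations where it would process one 32-bit value, this doubles to roughly 1024 GFLOPS on fp16 workloads – the advertised TeraFLOPS.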
Fig. 7 – Block Diagram of Tegra X1 Maxwell Architecture
The Tegra X1 SoC distinguishes its architecture with the following key features:
ARM big.LITTLE Cortex-A57/A53 64/32-bit CPU configuration, used in most high-end devices due to its high performance at low power consumption.
Maxwell GPU architecture, coupled with 256 cores.
End-to-end 4K 60 fps pipeline, capable of decoding H.265 and VP9 streams at 4K resolution and 60 fps
20 nm lithography process, a significant reduction from the K1 line.
What is interesting to note is the choice to invest development effort into this high-performance pipeline. While 4K displays are far from mainstream, especially on mobile Android devices, 4K is nonetheless the next evolutionary step in screen resolution. Full 4K adoption is a significant distance away, more so when taking into account the adoption rate of Full HD, which is still competing against HD screens. In a move at once cautious and bold, Nvidia architected its high-performance, end-to-end 4K@60fps pipeline so it may be used with general streaming services such as YouTube, Netflix, Amazon Now and Twitch, with communication services such as Google Hangouts, and to enable 4K screen casting with Google’s proprietary Chromecast protocol. To achieve the desired 60 frames per second at this resolution, Nvidia optimized most of the chip’s components specifically for it, from the high-speed storage and memory controllers, to the image signal and graphics processors, and the video decoders. This is significant, as to date no other mobile graphics processor has managed to produce 60 frames per second in 4K streaming; those that are even capable of 4K streaming are locked at 30 fps.
Fig. 8 – Block Diagram Representing the X1’s High-Performance Video Pipeline
The Central Processing Unit and SoC’s as a Whole
In terms of CPUs, disregarding the Chinese market, there are almost exclusively two SoC lines in use, built on completely different CPU designs. The first of them, the industry and market leader, is Qualcomm’s Snapdragon line. The first devices to contain this SoC were released in 2009, with Qualcomm gradually gaining market momentum, culminating in 2014, when Android phones had started using the company’s Snapdragon SoCs almost exclusively.
The Snapdragon uses Qualcomm’s proprietary CPU design, the current iteration of which is codenamed Kryo, present in the 820 SoC. The company has managed to drastically improve performance over the predecessor 810, going as far as marketing near-double performance and efficiency. This is achieved thanks to the new 14 nm lithography and general improvements across the board. The new chip also contains several proprietary components, such as the X12 LTE modem, the Hexagon DSP (Digital Signal Processor) and the Spectra ISP (Image Signal Processor). The 64-bit Kryo CPU is clocked at 2.2 GHz and contains 4 cores, as opposed to its predecessor’s 8 cores – Qualcomm’s claim is that the majority of Android apps still aren’t optimized for true multi-core processing, and so the core reduction won’t affect performance.
Fig. 9 – Qualcomm Snapdragon 820 Marketed Improvements
They elaborate that “The most important thing to have is peak single-threaded performance, as most of the time only one or two cores are active.” Moreover, “When most phones are playing games, web browsing, or just word processing, usually only 1.5 cores are active at any one time”.
Regarding connectivity, the Snapdragon 820’s X12 LTE modem further boosts network performance by adding LTE Category 12 support. This means that a device equipped with this SoC could theoretically reach speeds of up to 600 Mbps download and 150 Mbps upload – a significant improvement over the predecessor’s X10 modem, which is only capable of 450 Mbps downlink and 50 Mbps uplink. Further future-proofing the design, Qualcomm also developed the X12 as the first modem in the world to support LTE on unlicensed spectrum bands.
When talking about real-world performance, the new chipset, while not quite reaching Qualcomm’s claims of double the 810’s performance, is still a massive improvement. The following benchmark tests provide relevant comparisons between the nascent chipset and its predecessors.
Fig. 10 – Antutu (left) and 3DMark (right) scores comparing Snapdragon 820, 810 (Nexus 6P) and 808 (Nexus 5X)
Antutu’s benchmark reveals the Snapdragon 820’s impressive performance, scoring it 54% higher than the 810. In 3DMark’s GPU-centric Sling Shot test, the 820 managed to score 34% more than the 810. A far cry from the touted double performance, but a significant improvement nonetheless.
Battery life is not so easily comparable, as there are no current smartphones with similar screen and Android version specifications sporting the 810 SoC on one hand and the 820 on the other. Identical screens are mandatory, as the screen is the most powerful battery drainer in this mobile ecosystem, consuming as much as 40-50% of power. An identical Android version is also ideal, as power-management tweaks are slowly being implemented in each revision. Real-life tests, with no empirical standard, haven’t left reviewers impressed: they state that while performance is certainly adequate, and the phone will last through a day – the length of time which has become the norm for a decent smartphone – it will not last significantly longer than that.
Qualcomm also prides itself on its proprietary ISP (Image Signal Processor), the Spectra. It uses the latest 14-bit image sensors and provides features such as hybrid autofocus and multi-sensor fusion algorithms. This allows the camera ISP to capture a vast range of colors and, most importantly, to make use of computational photography. The Spectra itself is accompanied by an additional two ISPs.
In terms of raw specifications, interesting details are the added support for high-frequency (1866 MHz) LPDDR4 RAM, compatibility with both USB 3.0 and 2.0, and Tri-Band Wi-Fi support coupled with a 2×2 (2-stream) MIMO configuration.
A veritable contender in the market at the moment is Samsung’s Exynos line of SoCs. While the company is as yet unable to mass-produce enough of them to fill the order book of Galaxy phones (not to mention the various legal issues surrounding the marketing of foreign-designed and manufactured mobile SoCs within US borders), and so still equips a significant portion of its devices with Qualcomm SoCs, its proprietary Exynos line has nevertheless continued to impress in recent years. Developed to contest Qualcomm’s position as market leader, Samsung’s SoC is built with increased efficiency as its goal. Indeed, as evidenced below, the latest Exynos 8890 cannot outperform Qualcomm’s Snapdragon 820 in raw power. It does, however, come dangerously close.
Fig. 11 – Antutu Benchmark Comparison of Various Mobile Devices
Exynos’s edge comes from its 8-core CPU design and ARM’s big.LITTLE paradigm. The central idea is pairing high-power cores with low-energy ones in order to distribute the load as efficiently as possible. This allows the SoC to deliver increased parallel processing performance while maintaining low average power consumption. ARM also designed the software needed to manage tasks and assign them to the appropriate core based on their performance needs (big.LITTLE MP). This is what allows the Exynos SoC to achieve its impressive power efficiency. Additionally, the 8 cores allow Samsung’s SoC to outperform Qualcomm’s in a true multiprocess environment.
Fig. 12 – GeekBench 3 (Multicore) Performance Comparison
It is, however, important to note that, according to market competitor Qualcomm, true 8-threaded optimization is nearly nonexistent in Android, and 8-core multicore benchmark results are uncharacteristic of real-life experience.
While Samsung’s Exynos 8890 – codenamed Exynos 8 Octa – uses, as mentioned before, ARM’s big.LITTLE design, half of the standard processors were redesigned. The 4 high-power cores are in fact custom-designed by Samsung, based on the 64-bit ARMv8 architecture. The company claims they provide a 30% improvement in performance while also increasing power efficiency by 10%, compared to the predecessor Exynos 7 Octa. In order to fully utilize the benefits of the big.LITTLE system, Samsung also developed its own SCI (Samsung Coherent Interconnect) technology, which allows full cache coherency between the standard Cortex-A53 cores and the proprietary custom cores.
Fig. 13 – Samsung’s big.LITTLE design, employing their SCI technology
The Exynos 8 is manufactured on the same 14 nm lithography as Qualcomm’s 820. Samsung’s device, however, uses 2nd-generation FinFET LPP (Low-Power Plus) technology. Specific to the FinFET transistor is the placement of the gate on multiple sides of each source and drain, allowing more effective control of current leakage. Furthermore, carriers are able to move across the various surfaces of the transistor’s fin-shaped 3D structure. All of this purportedly leads to much faster computational speeds and significant improvements in performance and efficiency.
Fig. 14 – Visual construction comparison between a conventional FET transistor, with a 2D planar structure (Left), and a FinFET transistor, with a 3D fin structure (Right)
In terms of connectivity, the Exynos 8 sports the same performance as its Qualcomm rival: a Category 12/13 LTE modem capable of 600 Mbps downlink and 150 Mbps uplink speeds. Other similarities include the high-speed RAM support and the supported 4K display resolutions.
All things considered, the two most powerful mainstream SoC vendors are at the moment surprisingly similar in performance and power efficiency, with differences of at most 10% in key areas. Should Samsung deem it worthwhile to invest in mass-producing its Exynos chipset, it may very well abandon Qualcomm’s solution completely, and in doing so spark much-needed competition in the Android mobile SoC market. While Nvidia’s solution is also present, its focus at the moment remains on high-power, high-performance use cases, staying clear of the mobile phone market.
Android Wear Smartwatches
The smartwatch is defined as a computerized wrist device with far more functionality than merely keeping time. Certain early digital watches did indeed have the ability to perform various basic tasks, such as rudimentary gaming, or had a built-in calculator, but the smartwatch tag denotes a device more akin to a wearable computer, able to be programmed with new capabilities, as well as having true connectivity with other smart devices. As such, the term has come to be synonymous with touchscreen-enabled watches carrying such peripherals as accelerometers, compasses, thermometers, heart-rate readers, speakers, even cameras or GPS antennas. Their programmable software should also support applications like media players, schedulers and personal calendars or digital maps, as well as ways to interact with the watch’s various sensors. Currently, a method to connect to a personal smartphone is also seen as mandatory for a smartwatch. Most, if not all, use a Bluetooth connection to establish synchronization between the two devices, but some are also capable of Wi-Fi, GPS, or even radio communication.
A subsection of smartwatches has specialized in sports features and is universally known as “Sport Watches” or “Fitness Trackers”. These serve as either health or running recorders. The latter use built-in GPS to enable position tracking before a workout. The user starts the activity on the watch, proceeds to complete their workout, then uploads the activity to a smart device, a smartphone or a computer. Their routes are thus synchronized with the fitness service of their choice, allowing them to get a broad idea of their performance. Advanced features on these watches include training programs (having a set of achievable milestones directly on the watch to help maintain workout consistency), lap times (the ability to recognize retreading on a predetermined route and resetting the timer), real-time speed display, integration and display of health-specific sensors such as heart rate or blood pressure, as well as multi-sport capabilities, such as serving as a pressure/depth gauge for diving, or synchronizing with a cadence sensor, which measures bicycle wheel revolutions per unit of time.
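On Android Wear, such health sensors are exposed through the standard SensorManager API. The sketch below is a minimal illustrative example (class and variable names are my own) of reading the heart-rate sensor; it requires the BODY_SENSORS permission in the manifest:

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;

    public class HeartRateActivity extends Activity implements SensorEventListener {
        private SensorManager sensorManager;
        private Sensor heartRateSensor;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            heartRateSensor = sensorManager.getDefaultSensor(Sensor.TYPE_HEART_RATE);
        }

        @Override
        protected void onResume() {
            super.onResume();
            // begin receiving heart-rate samples
            sensorManager.registerListener(this, heartRateSensor,
                    SensorManager.SENSOR_DELAY_NORMAL);
        }

        @Override
        protected void onPause() {
            super.onPause();
            sensorManager.unregisterListener(this); // stop sampling to save battery
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            float bpm = event.values[0]; // current heart rate, beats per minute
            // forward bpm to the paired phone, store it, update the UI, etc.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }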
There are also certain hybrid watches composed of a traditional wristwatch body and functionality, but which do have the communication protocol and hardware to synchronize with a smartphone. This is usually achieved through dedicated apps on the smartphone which perform the exchange of data between the two devices. These watches can then receive alerts from a smartphone and notify the user of information such as calls, missed calls or received SMS messages. They are, however, usually more rudimentary than true smartwatches, and as such the integration with the mobile device is usually restricted to non-OS-specific tasks, i.e. the watch is only able to display phone calls, SMS messages and alarms.
Fig. 15 – Comparison between the Sony Ericsson MBW-150 (Left), a hybrid watch with a single-line OLED display capable of showing the caller’s number/name or a received SMS when paired with a phone, and the Samsung Gear S2 (Right)
Historically speaking, the first year to present a fully functioning smartwatch, colloquially known as the “year of the smartwatch”, was 2013. This is because, according to device analyst Avi Greengart of research firm Current Analysis, electronic components had finally become small and cheap enough to construct what technically amounted to a smaller, cheaper smartphone. That year, a significant number of companies revealed they were developing smartwatch devices, specifically: Toshiba, Sony, Samsung, Qualcomm, Microsoft, LG, Foxconn, Google, Blackberry, Acer and Apple. Early fears regarding such wearables – fears that still keep many manufacturers from investing sizable development effort – were mainly the physical size of the smartwatch itself, which would be significant with current technology, and the universal issue of all portable electronics: insufficient battery life. Another issue was display technology, as an LCD would not allow the watch to function with its screen permanently on without draining the battery dramatically faster. The only technology to allow a reduced-power mode was OLED. This is because an LCD, by design, is lit by a backlight spanning the entire screen, so whenever the screen is on, the entirety of it consumes energy. Conversely, an OLED display is composed of individual organic LEDs for pixels, which can be turned on or off one at a time. This allows a rudimentary display in which a fraction of the pixels show the time or other significant information, while the rest are turned off to save power.
2014 saw the actual large-scale release of several smartwatches from various producers, as well as the mainstream adoption of what would become one of the largest competitors in the smartwatch OS market: Android Wear. The OS itself is an extension of Android’s environment, and as such is restricted by the same hardware limitations and necessities. Notably, it requires either a 32-bit ARM, x86, or MIPS platform – only the first of which has actually been used in practice. While peripheral requirements are rather lax, all Wear watches support Bluetooth connectivity with a smartphone, and in fact require this synchronization for advanced functionality. Wi-Fi is supported on many devices, but is secondary to the Bluetooth connection.
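In practice, this phone-watch synchronization is performed through the Wearable Data Layer API bundled with Google Play services. The following sketch assumes an already connected GoogleApiClient built with Wearable.API; the path and key names are illustrative choices of mine. It publishes a sensor reading which the platform then syncs to the paired phone over whichever transport is active:

    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.wearable.PutDataMapRequest;
    import com.google.android.gms.wearable.PutDataRequest;
    import com.google.android.gms.wearable.Wearable;

    public class WearSync {
        // Publishes a heart-rate reading to the Data Layer; Google Play services
        // transparently delivers it to the paired phone (Bluetooth or Wi-Fi).
        public static void sendHeartRate(GoogleApiClient client, float bpm) {
            PutDataMapRequest mapRequest = PutDataMapRequest.create("/heart_rate");
            mapRequest.getDataMap().putFloat("bpm", bpm);
            mapRequest.getDataMap().putLong("timestamp", System.currentTimeMillis());
            PutDataRequest request = mapRequest.asPutDataRequest();
            Wearable.DataApi.putDataItem(client, request); // queued and synced
        }
    }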
The first watch to operate on the Android Wear platform was Motorola’s Moto 360, in 2014. As this was not only the company’s first foray into wearable technology, but also Google’s first real use case – and more or less its proof of concept – for Wear OS, hardware specifications on the SoC side were less than stellar. The operating system ran on a Texas Instruments OMAP SoC with a 1 GHz single-core Cortex-A8 processor. The watch was powered by a 320 mAh battery, which would last roughly 12-16 hours of use.
Fig. 16 – Block Diagram of the Qualcomm Snapdragon 400 SoC, described below
Currently, almost all Android Wear smartwatches are built on the Qualcomm Snapdragon 400 MSM8926 platform. Specifics of this SoC include a 32-bit ARM Cortex-A7 processor with 4 cores running at 1.2 GHz, and the low-end, low-power Adreno 305 graphics processing unit. The use of this specific SoC is due to its relatively cheap production costs, as well as its considerably efficient processing. The SoC itself has a Digital Signal Processor capable of utilizing mobile broadband signals, but few watches support mobile connectivity.
In terms of displays, as stated previously, most current watches have been designed with an OLED-type display, either P-OLED (Polymer OLED) or AMOLED (Active-Matrix OLED). Although some still use the classic LCD panel, it being significantly cheaper, most agree that a staple feature of a watch should be a permanent rudimentary display from which information can easily be glanced. Since an LCD consumes just as much power regardless of what is shown on screen, having an always-on mode drains the battery massively. As such, although the option exists within Android Wear, the so-called ‘ambient mode’ is not encouraged by manufacturers of smartwatches with LCD displays. Those that chose not to compromise in this aspect, and implemented an OLED display, bear no such issues, with the observation that they encourage watchface developers to provide low-detail, monochromatic faces in always-on mode.
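From the developer’s side, ambient mode is opted into explicitly. A minimal sketch using the Android Wear support library’s WearableActivity (the layout file and view identifiers are assumptions for illustration) looks like this:

    import android.os.Bundle;
    import android.support.wearable.activity.WearableActivity;
    import android.widget.TextView;

    public class WatchFaceActivity extends WearableActivity {
        private TextView clockView;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setAmbientEnabled(); // without this call the app is simply paused
            setContentView(R.layout.activity_watch_face); // assumed layout
            clockView = (TextView) findViewById(R.id.clock); // assumed view id
        }

        @Override
        public void onEnterAmbient(Bundle ambientDetails) {
            super.onEnterAmbient(ambientDetails);
            clockView.setTextColor(0xFFFFFFFF); // white-on-black: most pixels off
            // hide secondary elements, switch to a low-detail monochrome face
        }

        @Override
        public void onExitAmbient() {
            super.onExitAmbient();
            // restore the full-color interactive layout
        }
    }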
Web Server – Hosting and Physical Considerations
Whenever an individual or a company develops a web app or a website, the next step is making it accessible to users around the world through the World Wide Web. This is done via a web hosting service: a type of internet hosting service that manages a server – either owned by a hosting company and leased to the client, or owned by the web developer’s company itself – capable of holding and running the required software and web application.
The server’s primary function is therefore to store, process, and deliver web pages in response to client queries. This delivery is done using HTTP (Hypertext Transfer Protocol), but the data itself, although generally HTML documents, can be in any number of formats, from images, to scripts, to hard data. This entire operation is initiated by a client, a user agent, usually a web browser, by making an HTTP request on the World Wide Web to the website’s known domain name. This domain name is the site’s recognized and official address on the internet, and is held and authorized by a Registrar: a company, organization or commercial entity specialized in the management and reservation of such Internet domain names. When the client calls the site’s domain name, the request is picked up by a DNS (Domain Name System) service, which is either provided directly by the domain name registrar, or by an independent entity authorized to host DNS records by an official registrar. This service has the task of taking the requested domain name and resolving it to a digital address in the form of an IP (Internet Protocol) address. This is the logical designation describing the requested website’s actual location within the internet. The client then has the information required to find and communicate directly with the desired server, which in turn receives the client’s logical IP address so the communication can work both ways.
Fig. 17 – Diagram displaying the basic elements tasked with completing client-to-website communication
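The whole client-side sequence can be reproduced in a few lines of Java. The sketch below (using the reserved example.com domain purely for illustration) first performs the DNS resolution, then issues an HTTP GET to the resolved server:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.InetAddress;
    import java.net.URL;

    public class ClientRequestDemo {
        public static void main(String[] args) throws Exception {
            // 1. DNS: the resolver turns the domain name into an IP address.
            InetAddress address = InetAddress.getByName("example.com");
            System.out.println("Resolved to: " + address.getHostAddress());

            // 2. HTTP: the client sends a GET request to the web server.
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://example.com/").openConnection();
            conn.setRequestMethod("GET");
            System.out.println("Status: " + conn.getResponseCode());

            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                System.out.println("First line of body: " + reader.readLine());
            }
            conn.disconnect();
        }
    }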
The web server itself is practically a computer that processes these HTTP requests and responses. Generally, the term refers to both the hardware appliance and the underlying software or operating system that supports the website/web application. Loosely speaking, a web server can be built out of an ordinary home PC, but with numerous caveats, the most significant being, first, that it needs to be up and running constantly and consistently, and second, that it needs to be monitored regularly for issues. Furthermore, the hardware requirements for full-fledged web servers are generally higher than what an average home computer can supply.
As such, there are certain considerations that should be taken into account when designing and deploying a physical web server. First of all, it is important to note that there are no hard requirements for hosting a website: each website consumes as many resources as it needs to do its job and service all connecting clients. As such, a rough idea of the expected traffic can save a lot of time, money and effort when designing the ideal server. Specifically, the two major metrics are client requests per unit of time and connection bandwidth. A server needs to ensure that HTTP requests are serviced within a minimal timeframe, while maintaining decent bandwidth for the desired data to be transferred.
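To make these metrics tangible, the sketch below stands up a minimal HTTP server using the JDK’s built-in com.sun.net.httpserver package; the size of its thread pool is precisely the knob that bounds how many client requests can be serviced concurrently (the port and pool size are arbitrary illustrative choices):

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;

    public class MiniServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", (HttpExchange exchange) -> {
                byte[] body = "Hello from the hub".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            // 8 worker threads: at most 8 requests in flight at any moment
            server.setExecutor(Executors.newFixedThreadPool(8));
            server.start();
        }
    }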
These physical requirements revolve mostly around the same areas that define a personal computer: RAM, CPU and storage media. Although progress is being made in adapting the GPU to general-purpose processing owing to its massive parallelism, web hosting is not yet considered an adequate recipient of such investment, as it does not involve inherently multi-threaded tasks on the order of hundreds of threads. An interesting technique used in highly intensive servers to increase performance is implementing a multiprocessor system. Certain manufacturers produce specialized motherboards that contain two or more CPU sockets, as opposed to conventional motherboards, which support only one. This permits the use of dedicated cache memories for all processors, allowing for maximum performance, in contrast to purely multi-core/multi-threaded systems. The drawback is that the software must also be designed to take advantage of multiple CPUs.
Fig. 18 – Graphical Layout of a Dual-CPU Motherboard (Left) and Basic Block Diagram of a Symmetric Multiprocessor System (Right)
CPU and memory go hand in hand in server systems, as there are specialized forms of both to be used in high-endurance environments where faults need to be minimized. A specific type of memory is used, called ECC memory (Error-Correcting Code memory), which has the capability to detect and correct the most common kinds of internal data corruption. This type of memory can therefore prevent, or at least significantly reduce, the number of crashes due to memory errors – a very important factor when permanent uptime is mandatory. The main downsides are slightly lower performance, due to the time the memory controller needs to perform error checking, and higher cost, due in part to the additional required hardware, and in part to the lower manufacturing volume of such a component (since it is only used in specialized environments). Furthermore, this type of memory requires specialized support in the CPU’s memory controller, which in Intel’s lineup is present in the Xeon processor line.
The Xeon itself is Intel’s brand of microprocessors designed and marketed specifically for non-consumer environments, such as workstations, servers and embedded systems. Besides their support for ECC memory, these processors differ from their mainstream counterparts in their support for multi-socket systems, as well as much higher core counts. This makes them ideal for multi-task, highly parallel workloads.
Recent Xeon models have also placed heavy emphasis on security features. These include so-called “Intelligent Orchestration”, which supports cache and bandwidth monitoring to aid IT specialists in providing more reliable service levels, as well as improved, high-performance encryption, with a purported up-to-70% increase in per-core performance on encryption algorithms – potentially allowing practically transparent protection of data in workload transmission. Furthermore, various software and hardware technologies have been implemented to prevent stealthy malware and zero-day attacks. These techniques on the one hand employ privileged access in order to restrict platform resources to legitimate processes, and on the other provide support for new security implementations that use deep memory monitoring.
In terms of performance, Xeon processors distinguish themselves through massive parallelism compared to Intel’s desktop counterparts, sporting as many as 24 cores – 48 threads with Hyper-Threading – and as much as 60 MB of cache per socket (compared to 8 MB of cache on Intel’s top-of-the-line i7-6700K consumer processor). They also permit up to 24 TB of memory in an 8-socket system, compared to a maximum of 64 GB of RAM on the aforementioned 6700K Skylake processor. In terms of sockets, current Xeons are the only existing processors to support up to 32 sockets on an individual workstation, adding up to a potentially massive thread count of 1536.
Software
This chapter refers to the software aspects of all platforms involved, specifically to the entire Android ecosystem, as well as the software on the server side.
Android OS Concepts and Paradigms
Even though the Android operating system is currently used in a multitude of devices, from TV set-top boxes, to car infotainment systems, to even amateur robot development kits, it started with nothing but smartphones in mind, focusing solely on the touchscreen for device interaction. This type of user interaction is designed to be intuitive, imitating real-world direct-manipulation gestures, such as swiping objects, tapping on interactive buttons or pinching the screen to magnify or reduce in size. Discrete character input is also implemented through a virtual touch keyboard.
Historically speaking, the Android system was in production for a very long time, with the original team starting its work as far back as 2003. Two years later Google acquired the company and, while keeping the original team on board, spearheaded the project in order to compete with Nokia’s Symbian OS and Microsoft’s Windows Mobile. The first Android smartphone was released in 2008 without much fanfare, as the world had already been impressed by Apple’s recently released iPhone, but nearly 10 years later Google’s OS has managed to obtain nearly 80% of the global smartphone market.
A staple of the Android operating system is its open-source nature. Even though most Android devices do ship with a mixture of open and proprietary software and applications – most notably, Google’s proprietary services – the OS source code itself is free and released on the internet. Owing to this, the system is very popular with tech companies that wish to utilize an already developed, low-cost and fully customizable operating system, either to keep costs down on high-tech projects, or to inject a cheap device into a low-income market. The downside of this completely open and customizable nature is the lack of a centralized updating system. Since each manufacturer can and does modify its version of Android, Google cannot publish updates that apply to every variation of the operating system. As such, many Android devices are left behind in terms of security updates, with even recent high-end models receiving security patches late. In 2015, research concluded that nearly 90% of all Android smartphones retained publicly known but unpatched security vulnerabilities because of their manufacturers’ lack of security updates and support.
Regarding the user interface, as stated above, Android’s is based principally on direct manipulation: touch inputs that simulate real-world gestures. These include actions like swiping, tapping or pinching the screen, each producing an effect that most closely matches reality – swiping generally moves parts or objects within the UI, tapping interacts with button-like elements, and pinching in or out shrinks or enlarges the object being manipulated. Since the OS can often be viewed as a full-fledged PC operating system, integration with external physical input peripherals like game controllers, actual keyboards or mice is abundant, supported via Bluetooth or USB. The device’s internal hardware is also designed to offer responsive user feedback, either haptic, through its vibration motor, or through interface changes in response to its various sensors.
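At the application level, these gestures arrive as raw touch events that the framework helps classify. A minimal illustrative sketch (class names are my own) using Android’s GestureDetector and ScaleGestureDetector:

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.GestureDetector;
    import android.view.MotionEvent;
    import android.view.ScaleGestureDetector;

    public class GestureActivity extends Activity {
        private GestureDetector gestures;
        private ScaleGestureDetector pinch;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            gestures = new GestureDetector(this,
                    new GestureDetector.SimpleOnGestureListener() {
                @Override
                public boolean onSingleTapUp(MotionEvent e) {
                    return true; // tap: activate the element under the finger
                }

                @Override
                public boolean onFling(MotionEvent e1, MotionEvent e2,
                                       float vX, float vY) {
                    return true; // swipe: scroll or move content along (vX, vY)
                }
            });
            pinch = new ScaleGestureDetector(this,
                    new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                @Override
                public boolean onScale(ScaleGestureDetector detector) {
                    float zoom = detector.getScaleFactor(); // >1 enlarges, <1 shrinks
                    return true;
                }
            });
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            pinch.onTouchEvent(event);                // pinch-to-zoom
            return gestures.onTouchEvent(event)       // taps and swipes
                    || super.onTouchEvent(event);
        }
    }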
The device homescreen is the UI hub, akin to a Windows machine’s desktop. It is the central, ‘home’ location of the entire interface, and is commonly populated by widgets and app icons, all of which are entirely customizable by the user. The icons themselves point to a specific app and launch it, analogous to program shortcuts on a Windows PC, while widgets present real-time, auto-updating content such as news, weather or messages. This general layout is, however, so heavily customizable that most companies dealing in Android devices choose to significantly modify how it looks and navigates in order to distinguish themselves from their competitors.
Another universal element of the Android user interface is the status and notification bar at the top of the screen. This is the primary channel for delivering information updates to the user without inconveniencing them. The bar itself remains compact, with occasional flags signaling live notifications, until pulled down by the user. Executing this gesture reveals the full notification screen, detailing the collected app updates and notifications the user has not yet responded to.
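Posting to this bar from an app takes only a few lines through the support library’s NotificationCompat. A minimal sketch (the title text and reuse of a built-in icon are illustrative choices):

    import android.content.Context;
    import android.support.v4.app.NotificationCompat;
    import android.support.v4.app.NotificationManagerCompat;

    public class Notifier {
        // Posts an update to the notification bar; the compact bar shows the
        // small icon, while the pulled-down shade reveals title and text.
        public static void notifyReading(Context context, String message) {
            NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
                    .setSmallIcon(android.R.drawable.ic_dialog_info)
                    .setContentTitle("Health Monitor") // illustrative title
                    .setContentText(message)
                    .setAutoCancel(true); // dismiss when the user taps it
            NotificationManagerCompat.from(context).notify(1, builder.build());
        }
    }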
The major functionalities of the device, however, come from the execution of programmed applications, or “apps”. These are the Android analog of Windows programs: programmable entities written by developers using Android’s SDK (Software Development Kit) to achieve specific tasks. The SDK itself is a bundle of all the needed development tools, including debuggers, libraries and device emulators, as well as tutorials, documentation and sample code. The main and official IDE (Integrated Development Environment) for Android’s SDK is currently Android Studio. This is a PC program – supported by all major operating systems – that integrates the SDK while supplying a graphical user interface to the programmer, providing an optimal environment for application development.
The apps themselves are most often written using a combination of XML (for the graphical representation of objects) and the Java programming language (for the app’s internal logic). Certain other languages can be used, such as C/C++ and Go, but these usually come with restricted access to the system’s APIs (Application Programming Interfaces).
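A complete, minimal example of this XML-plus-Java split (the layout file activity_main.xml and its widget IDs are assumed to exist) looks as follows: the XML declares the widgets, while the Java class below supplies the logic behind them.

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;
    import android.widget.Button;
    import android.widget.TextView;

    public class MainActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main); // inflate the XML-defined UI

            final TextView label = (TextView) findViewById(R.id.status_label);
            Button button = (Button) findViewById(R.id.refresh_button);
            button.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    label.setText("Updated"); // logic in Java, presentation in XML
                }
            });
        }
    }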
The primary and only official channel for acquiring Android apps is the Google Play Store. This is an app itself, and is part of Google’s proprietary set of services, all licensed under ‘Google Mobile Services’. Because of this, use of the Play Store is only officially allowed on devices licensed by Google, and is therefore unsupported on custom Android variants. The store allows the browsing and downloading of applications, as well as background updating, providing increased quality of life over sideloading apps. The store also enforces regulations on the distributed apps, with the aim of protecting against malicious software. Regardless, due to Android’s open-source nature, third-party marketplaces exist, providing unregulated, unofficial applications. Some of these exist solely to replace the Play Store on unlicensed Android devices, while others distribute applications that specifically violate various policies and so cannot be submitted to the Play Store.
Regarding the Play Store’s security functions, it is worth noting that Google employs a proprietary automated antivirus and antimalware system called Google Bouncer to verify each app at submission. Recently, it has also applied deep neural networks to build a system capable of recognizing malicious software within apps – malware that would escape a conventional antivirus. While this system is still new, Google is investing significantly in helping it learn, feeding it known problems and malware so that it has a reference base against which to compare each new submission.
Android’s system core, the kernel, is based on the Linux kernel, starting from version 2.6.25 in the first iteration of the mobile OS. Android has, however, introduced many significant changes to it over the years, leading to disputes between Google and Linux developers, as the former was not seen as making a serious effort to merge these Android changes and features back into Linux. Some of them were eventually ported, such as the autosleep and wakelock capabilities, but it took several attempts. A notable difference between the two kernel series is that, unlike Linux, the Android system restricts access to certain sensitive disk partitions, such as “/system”, in essence denying users root access. Still, vulnerabilities can be exploited to obtain root access on virtually any Android smartphone, and this is indeed the process by which open-source communities alter and experiment with various Android features within the system and kernel, in the hope of improving some of them. Linux itself was chosen as the base kernel owing to its superior memory and process management, the ease with which multi-user, multi-level security restrictions and instances could be implemented, as well as its already open-source nature and its massive library-sharing communities.
While built on top of a Linux kernel, the Android system itself comprises the upper layers of the stack: libraries and APIs, standalone application features, and the middleware in between – the entire level between the kernel and the applications. All of this is, in essence, Android’s own development effort, as the Linux kernel itself is maintained independently.
Fig. 2.2.1-1 – Android Architecture Diagram, displaying its software stack.
Java code is executed in a virtual machine, which prior to Android 5.0 was Dalvik. Specific to this process virtual machine was its trace-based JIT (just-in-time) compilation, which actively reads and analyzes the bytecode, translating it while the application itself is still executing. This changed to AOT (ahead-of-time) compilation between Android 4.4 and Android 5.0, when Google introduced the Android Runtime (ART) as the default runtime virtual machine. This new type of compilation gives up translating code mid-execution and instead pre-compiles the entire bytecode when the app is installed, essentially turning Android applications into native executables.
The libraries layer consists of the Java libraries specific to the Android system. These follow the format “android.x”, where x denotes the area that the library deals with. For example, the core library is android.app, which supplies access to the entire Android application model. Other notable libraries are android.content, android.opengl and android.os, each used for particular kinds of access or building blocks.
The next layer, the Application Framework, consists of the various services and tools that manage the running of any application. These run natively in the background, but developers are also permitted to call and access them for additional functionality within an app; a brief usage sketch follows the list below. Certain core services are the following:
Activity Manager – roughly the most important element in the entire layer, it manages the whole application lifecycle, in essence controlling the application and enabling it to run.
Resource Manager – second in importance, this manager allows developers to access the system resources, which represent virtually everything apart from the Java code. Since Java provides only the underlying logic, while an Android app uses XML files to visually populate the screen, these XML files also fall into the resource category.
View System – the manager that provides the UI elements used inside the XML files to display graphical content.
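The sketch below shows, under assumed resource names, how an app might touch two of these framework services from Java: resolving an XML string through the Resource Manager and querying the Activity Manager via getSystemService(). The resource R.string.app_name is assumed to exist in this hypothetical project.

```java
import android.app.Activity;
import android.app.ActivityManager;
import android.content.Context;
import android.os.Bundle;

public class FrameworkDemoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Resource Manager: resolve a string defined in XML (assumed resource name).
        String appName = getResources().getString(R.string.app_name);

        // Activity Manager: obtain the framework service that oversees running
        // applications and query the device's current memory state.
        ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo memoryInfo = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(memoryInfo);
    }
}
```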
The final layer, at the highest level, represents the entirety of applications running on the Android device, providing all functionality and the user interface (the homescreen itself is an app). What is interesting about Android applications is their runtime environment. It was mentioned before that among the most important aspects of Linux considered when choosing the kernel was its support for multi-user environments. This is because the Android system uses the kernel in such a way that each application is, as far as the kernel is concerned, a different user. Each of these users is given a unique ID, which identifies the application itself, and the kernel employs this ID to grant access to the required files only to their respective application. Furthermore, since they are in essence Linux users, each application is permitted to, and does, run within its own virtual machine, isolated from all other apps. All of this allows Android to create a reasonably secure environment in which each process only has access to its own resources. Several methods of file sharing exist, such as enabling two apps to use the same user ID, thus sharing access to the same resources, or through the use of Intents – calls to other apps or services – which shall be discussed shortly.
The applications themselves are each constructed around a combination of four components, each with a different functionality, usage, lifecycle and most importantly, each existing independently of each other, as a separate entity. These components are:
Activities
Services
Content Providers
Broadcast Receivers
Activities are represented by a single graphical user interface and its attached code. Each screen in an app represents a single activity, with each activity paving the way to the next one when additional functionality is required. Activities can be started by other activities – even ones in other apps – by use of Intents, for example by taking a picture with the camera and asking for it to be shared through the Facebook app. Any such entity is a subclass of the Activity class.
An activity’s status is mostly described by its current position in its lifecycle. This lifecycle is a set of states that the system moves activities through using specific callback methods. It is most often visualized as a hierarchical step pyramid, with each step representing a different stage and each stage connected to the others through the various callback methods; at the top of the pyramid is the stage at which the activity runs on screen, has full priority and is open to user interaction.
Fig. 2.1.1-2 – Simplified diagram of the Activity lifecycle.
The callback methods are used by the system to ensure that the activity can be dismantled once the user is finished with it, as well as to manage its state so that it does not misbehave, for example by:
Causing a crash should another application be switched to the foreground, or should the device receive a phone call.
Losing the cached data or progress within the app if the user opens another app.
Consuming system resources while delegated to the background.
Losing data or progress, or crashing, should the device’s screen rotate between portrait and landscape.
Although there are six steps on the pyramid, an activity can only exist for an extended period of time in three of them, the rest being transitional states. These three static steps are the following:
Resumed: sometimes called the “running” state, this is the stage in which the activity runs in the foreground and can be interacted with by the user.
Paused: a rather hybrid state, in which the activity is only partially obscured by another application or activity. This other activity must be in the foreground and be partly transparent, or contain pieces that do not completely cover the screen (a good example is Facebook’s Messenger app, which supports “chat heads” – bubbles specific to each conversation that appear universally on top of all other apps and, once interacted with and brought to the foreground, act as an overlay on top of the preceding app without entirely covering it). A paused activity cannot receive user input or execute code.
Stopped: the state of an activity once another has completely taken over the foreground or the screen. The original activity is completely hidden, invisible to the user, and delegated to the background. While in this state, the activity does keep all of its cached data, information and progress, but it is not allowed to execute any code.
The other two steps within the activity lifecycle pyramid are transitional ones. They are used to prepare the activity to be placed in one of the main three steps without causing crashes or losing progress, but the activity is not designed to remain in them. The first, “onCreate()”, is executed the moment the activity is launched, and it then immediately calls the “onStart()” method, which itself leads directly to “onResume()”. These are all mostly invisible to the user, and even though a developer has access to them and can override them for a specific activity, forcing an activity to remain in these states causes the Android system to shut the activity down.
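A minimal sketch of how these callbacks are typically overridden is shown below; the layout name is a placeholder, and the comments map each method onto the lifecycle stages described above.

```java
import android.app.Activity;
import android.os.Bundle;

public class LifecycleAwareActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);      // transitional: build the UI exactly once
        setContentView(R.layout.activity_demo);  // placeholder layout name
    }

    @Override
    protected void onResume() {
        super.onResume();   // top of the pyramid: the "Resumed"/running state
        // reacquire exclusive resources here, e.g. restart sensors or animations
    }

    @Override
    protected void onPause() {
        super.onPause();    // partially obscured: commit unsaved progress quickly
    }

    @Override
    protected void onStop() {
        super.onStop();     // fully hidden: release heavy resources to the system
    }
}
```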
As mentioned above, when an app is selected within the Android UI, the system executes the “onCreate()” method of the activity that the app has designated as its main, or launcher, activity. This represents the gateway into that application’s user interface. Interestingly, an application lacking such a launcher or main activity will not populate Android’s app drawer and will be completely invisible to the user. These activities are defined within the Android manifest file, which shall be discussed shortly.
Services are elements that are permitted to run in the background, executing operations designed not to stop once an activity has been displaced from the foreground. Owing to this, there is no user interface attached to a service. Examples of such components include applications designed to save and continue progress in the background, or any music player that continues to play while the screen is turned off. A service’s functionality is increased when another component is bound to it, allowing the service to execute I/O operations, network exchanges or interactions with any number of other entities while remaining in the background. Just like activities, a service is able to exist in several states, these being:
Started: designates the service that is called by an active component of the application through the “startService()” method. In this state, the service is meant to execute its code in the background until its end, even should the activity or the entire application that called it be destroyed. Once the service has finished its programmed task, it should stop itself. If not, the Android system is designed to periodically clean its background services in order to free up memory and prevent resource leakage.
Bound: the state of the service once an active component calls it using the “bindService()” method. This state opens the service to interaction with other components, usually active ones, through requests and replies; IPC (Interprocess Communication) allows this even across multiple processes. Since a bound service is a platform for another component to run work in the background, it only runs for as long as a component remains bound to it. When more than one component is bound to a service, all of them must either unbind or be destroyed before the service is discarded.
Several core methods are used to manage a service and handle its lifecycle. The “onStartCommand()” method is the actual starter, the means by which other active components request that the service be started. The “onCreate()” method is then called by the system to execute the start-up setup actions – in effect, although “onStartCommand()” triggers the start, its body only executes after “onCreate()” has finished its setup. The “bindService()” method is used from within a different component, and is the way that component requests to be attached to the service. When this method is called, the system calls and executes the “onBind()” method, similar to “onCreate()”, to run several setup processes before the binding takes effect. The “onDestroy()” method is called and executed by the system when the service ends and should be destroyed; within each service’s particular implementation of this method, code can be executed to save the service’s progress.
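The skeleton below sketches both patterns under assumed names: a started service that performs its work and stops itself, and a binder returned from “onBind()” for clients that attach via “bindService()”.

```java
import android.app.Service;
import android.content.Intent;
import android.os.Binder;
import android.os.IBinder;

public class DemoService extends Service {

    // Handed to clients that attach via bindService(); lets them call back
    // into this service instance directly (local, same-process binding).
    public class LocalBinder extends Binder {
        public DemoService getService() { return DemoService.this; }
    }

    private final IBinder binder = new LocalBinder();

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // "Started" state: do the background work, then stop ourselves so the
        // system does not have to reclaim an idle service later.
        doWork();
        stopSelf();
        return START_NOT_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        // "Bound" state: the service lives while at least one client is attached.
        return binder;
    }

    private void doWork() { /* background task placeholder */ }
}
```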
A notable observation is that Android employs a memory-management routine that scans for and destroys services when memory is low and the foreground activity requires more of it. This works using a hierarchy of priorities, with the system lowering a service’s priority step by step if no active component has interacted with it recently.
In terms of parent classes, a service can extend either the Service class or the IntentService class – itself a subclass of Service. The Service class is the parent of all services, and most of the time the class to be extended when a service is being programmed. IntentService is used to respond to service start requests sequentially, easing implementation when the desired service does not need to handle more than one request at a time.
Fig. 2.1.1-3 – Service State Diagram, showing the methods called to transition between states.
Content Providers are Android’s way of managing application data and information. Data can be saved as files in the system’s file tree, as an SQL database, or in various other media that allow persistent storage. An application’s activities or other components can use a content provider to request access to certain data. The content provider is then programmed to decide whether to answer the request with the data or deny access. Perhaps the best-known content provider is the one tasked with handling users’ contact information: any application wishing to query, modify or delete contact data must first call this specific content provider, which in turn verifies the app’s permissions and decides whether to respond.
Upon request and admission, the data is presented as relational tables of structured information, with a row of fields for each requested object. For example, as shown below, the content provider containing the user dictionary stores the spellings of uncommon words that the user wishes to keep.
Fig. 2.1.1-4 – Table held by content provider showing user dictionary data
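A sketch of how such a table might be queried through a content provider is given below, using the platform’s user dictionary provider; the appropriate read permission is assumed to be declared in the manifest.

```java
import android.app.Activity;
import android.database.Cursor;
import android.provider.UserDictionary;

public class DictionaryQueryActivity extends Activity {
    private void readUserDictionary() {
        Cursor cursor = getContentResolver().query(
                UserDictionary.Words.CONTENT_URI,              // which provider to ask
                new String[] { UserDictionary.Words.WORD,      // projection: columns wanted
                               UserDictionary.Words.FREQUENCY },
                null, null, null);                             // no selection, args or sort order
        if (cursor == null) return;
        while (cursor.moveToNext()) {
            // Each row of the relational table is one saved dictionary word.
            String word = cursor.getString(cursor.getColumnIndex(UserDictionary.Words.WORD));
        }
        cursor.close();
    }
}
```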
Broadcast Receivers are components specialized in listening and replying to broadcast calls. While many of these come from the system itself – common broadcasts include notifications that the screen has turned off, that power is low, or that the camera has taken a picture – applications and active components can also create broadcasts. This is usually done in relation to other apps or components, to announce that certain tasks have finished and that certain resources are now free. While broadcast receivers are akin to services in that they employ no user interface, they can populate the notification bar if the broadcast is set to be announced to the user as well.
All broadcasts pertain to one of two classes. A “normal broadcast” is fully asynchronous: this is the typical, general type, in which all receivers are executed in no particular order, often even simultaneously. While this increases efficiency, it also means that certain methods and APIs cannot target specific receivers. The second type, the “ordered broadcast”, delivers the broadcast to one receiver at a time, allowing each receiver to analyze the result and forward it to other receivers belonging to other components.
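A minimal receiver sketch follows, reacting to the system’s low-battery broadcast; the dynamic registration shown in the trailing comment would be performed from an activity or service.

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class BatteryLowReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Invoked when the system announces Intent.ACTION_BATTERY_LOW;
        // react here, e.g. pause non-essential background work.
    }
}

// Registered dynamically from a Context (activity/service), for example:
//   registerReceiver(new BatteryLowReceiver(),
//                    new IntentFilter(Intent.ACTION_BATTERY_LOW));
```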
Besides these four major application components, an Android app is composed of various other classes and objects, such as views, layouts or fragments, each with a specific task to fulfill in order to give the application all the functionality it requires. One of the most important objects used is the Intent. This object is the foremost means by which activities or components can request access or action from another component of the app. Generally speaking, an intent serves one of the following three key uses:
The Call and Execution of an Activity
The Call and Execution of a Service
The Delivery of a Broadcast
Since each activity is in effect a single screen in an application, as discussed above, a way is needed to switch to another screen in order to present additional functionality. This is achieved via such an intent: the initial activity passes the intent object to the “startActivity()” method, which in turn opens and executes the second activity. The intent itself can also carry any relevant information over to the following activity. An intent is likewise used when starting and binding services, allowing the application to carry the data necessary for the service’s code execution. After the service has finished, its result is carried back to the main activity via another, similar intent.
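The snippet below sketches all three uses from inside an activity; the target classes (DetailActivity, DemoService), the extra key and the broadcast action string are illustrative placeholders.

```java
import android.app.Activity;
import android.content.Intent;

public class IntentUsageActivity extends Activity {
    private void demonstrateIntents() {
        // 1. Call and execution of an activity, carrying a piece of data along.
        Intent openDetail = new Intent(this, DetailActivity.class); // hypothetical activity
        openDetail.putExtra("patient_id", 42);                      // illustrative extra
        startActivity(openDetail);

        // 2. Call and execution of a service.
        startService(new Intent(this, DemoService.class));          // hypothetical service

        // 3. Delivery of a broadcast to any interested receiver.
        sendBroadcast(new Intent("com.example.ACTION_TASK_DONE"));  // illustrative action
    }
}
```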
Each Android application must also contain an Android Manifest file (named AndroidManifest.xml). This is an actual file in the app’s root directory which supplies the Android system with all the necessary information to run the app. The file most importantly contains the following:
The application’s Java package name, which serves as a unique ID for the respective app.
Details regarding all of the app’s components described beforehand, where many necessary settings for these components are set and saved.
Information regarding the Android version used when developing the application as well as the minimum such version required to run the app.
Lists of all libraries used within the application.
Android Wear OS
Android Wear is Google’s official operating system for smartwatches. First announced in 2014, it is designed as an extension of the smartphone, delivering information and allowing data and commands to flow both ways. The OS integrates fitness as a core part of its feature set, allowing step counting, GPS tracking and heartrate measurements. Since it is designed from the ground up to extend a smartphone’s control and interaction, the device still needs to be connected to a smartphone for a complete feature set, even though some manufacturers implement cellular support in their Wear smartwatches.
The OS’s two main native means of communicating with a smartphone are its integration with Google Now and its use of smart notifications. With Google Now, the user can issue commands to the watch, either through a specific app interface or globally, and the watch will relay them to its paired smartphone. The other integral part is the watch’s use of the notification center on the smartphone itself. While the watch does use local notifications, its primary interaction with the phone is mirroring all notification alerts from it and presenting the user with a few limited options for replying to them. Increased functionality can also be implemented, giving the watch exclusive actions regarding those notifications that do not appear on the smartphone.
Fig. 2.2.2-1 – Example of difference between the notification on a smartphone, and its mirrored implementation on the Android Wear OS
Regarding the OS itself, it is important to note that it is built fully as an alternatively styled Android OS. In essence, this means that the watch runs a reduced but fully functional Android OS, theoretically capable of running any phone app, since Wear OS is compatible with all of Android’s libraries. The limitation is the hardware, which is incapable of running many high-performance tasks.
Consequently, the primary differences between Wear apps (called “Wearable Apps” by Google) and Android apps are the following:
Wearable apps must be reduced in size and functionality compared to their handheld counterparts. App data should contain solely what is necessary for the app to function on the device, and should not include superfluous details and actions. Extended processing and code execution should be done on the handheld through the use of services, with the Wear device’s sole contribution being the delivery and reception of data and results.
As of Wear version 1.5, wearable apps are not downloadable directly onto the watch; instead, each is designed as an extension of a full-fledged handheld app. The Wear component is installed automatically when its handheld “big sister” is installed.
While wearable apps technically have access to the entire set of Android system libraries and APIs, some are off limits due to the hardware’s limitations, chief among them android.webkit, as the watch completely lacks the processing strength to browse websites.
There are also software features specific to Wear devices, such as Ambient mode. Because the watch must be small enough not to be cumbersome or uncomfortable around the wrist, it can only hold a small battery, and therefore needs a mechanism to reduce power drain. For this purpose, Google designed and implemented a state the device transitions to when it becomes idle: when the wrist returns to a rested position through the typical gesture, or when the palm covers the screen. For apps capable of such functionality (mainly watch faces, i.e. the watch’s equivalent of the Home application on a handheld), the two states are interactive and ambient. Interactive mode permits full use of the device’s hardware platform, with the complete color range and graphically accelerated animations. Ambient mode dims the screen to the bare minimum, ignores input, reduces the color range to grayscale and disables hardware acceleration.
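A hedged sketch of how an app can opt into these ambient transitions, using the support-wearable library’s WearableActivity callbacks, is shown below.

```java
import android.os.Bundle;
import android.support.wearable.activity.WearableActivity;

public class AmbientAwareActivity extends WearableActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setAmbientEnabled();  // opt in to ambient-mode callbacks for this activity
    }

    @Override
    public void onEnterAmbient(Bundle ambientDetails) {
        super.onEnterAmbient(ambientDetails);
        // The device went idle: switch to grayscale, mostly-black visuals
        // and stop animations to minimize battery drain.
    }

    @Override
    public void onExitAmbient() {
        super.onExitAmbient();
        // Back to interactive mode: restore full color and animation.
    }
}
```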
Communication between the Wear device and an Android phone is generally performed through the Data Layer, over a Bluetooth connection. The layer itself is an API supplied by Google, through Google Play Services, that provides access to the data-exchange link between the devices. Data can be sent back and forth through this connection using either the Wearable Message API or the Wearable Data API. The Message API is built and optimized to pass messages containing single text strings up and down the protocol stack. When an app requires the transmission of more complex data, it is usually encapsulated within a DataMap object. This object – which contains a set of various data types organized as key/value pairs – is used by the Data API to send such information through the communication protocol, moving across the stack in the same manner as the message object.
Fig. 2.2.2-2 – Diagram displaying the Wearable Message API protocol stack (left) and the Wearable Data API protocol stack (right)
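The sketch below shows both paths under the Google Play Services API of that era; the GoogleApiClient is assumed to be already connected and built with Wearable.API, the node ID is assumed to have been obtained via the NodeApi, and the path and key names are illustrative.

```java
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.PutDataRequest;
import com.google.android.gms.wearable.Wearable;

public class DataLayerSketch {
    static void send(GoogleApiClient client, String nodeId) {
        // Message API: a lightweight one-shot payload to a known node.
        Wearable.MessageApi.sendMessage(client, nodeId, "/heartrate", "72".getBytes());

        // Data API: richer, synchronized key/value state carried in a DataMap.
        PutDataMapRequest mapRequest = PutDataMapRequest.create("/heartrate");
        mapRequest.getDataMap().putInt("bpm", 72);
        PutDataRequest request = mapRequest.asPutDataRequest();
        Wearable.DataApi.putDataItem(client, request);
    }
}
```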
Web Server Protocols and Communication
In terms of communication, the most widely used protocol for transporting information across the network is HTTP – the Hypertext Transfer Protocol. The protocol sits at the highest level of the OSI model, the 7th (Application) layer, and mainly employs the TCP transport protocol, although it can be and has been adapted to function over unreliable transport such as UDP – a variant called HTTPU. It was designed first and foremost, and is still widely used, for the distribution of hypermedia information.
Historically, the first implementation of HTTP was part of the original World Wide Web prototype – notably, this first protocol version supported only one method, GET, used to request a webpage. The protocol went on to be published as a guideline in 1991 as version 0.9, with the most widely deployed version, 1.1, completed and documented in 1999. Recently, in 2015, a successor to the protocol was designed and released, namely HTTP/2, but its adoption is still nascent.
What makes HTTP ideal in a client-server environment is that it was designed from the ground up as a request-and-reply communication protocol. That is, the protocol functions by having a client – a mobile phone application, for example – submit an HTTP request to a server across the internet or network – a web hosting server, for example. The latter processes this request and, depending on what is asked, responds either with data and resources, such as web content in the form of HTML files, or with information in the form of another HTTP message, this time called a response message.
Since the protocol is meant to be used over reliable transport, i.e. TCP, it requires a session – the HTTP session – represented as a closed set of request-and-response transactions between the same client and server. The session is always started by the client, which issues the first HTTP request over a TCP connection to a certain port, most often the standard HTTP ports 80 and 8080 or the HTTPS port 443. The server then replies with an acknowledgement message, followed by the requested information. A particular type of session begins with an HTTP authentication scheme. These are built-in mechanisms by which the server can challenge the user before delivering the requested information; examples are Basic Access Authentication and Digest Access Authentication.
As mentioned above, the request-and-reply system works by means of methods, each designed to represent an action the client wishes to perform. The first method, as stated before, was GET, followed by POST and HEAD in version 1.0, and finally by OPTIONS, PUT, DELETE, TRACE and CONNECT in version 1.1. Importantly, although these are the official methods of HTTP communication, the protocol does not restrict the definition and usage of custom methods, nor does it put a hard limit on their number. As such, a client-server system can be developed to use any number of custom methods to suit the developer’s needs. The official methods are as follows (a client-side sketch follows the list):
GET: the primary method of HTTP, designed to request and retrieve information. Data is delivered by way of objects called entities. The method can be used conditionally, through a special header field called “If-Modified-Since”, which requests that the information be supplied only if it has changed since a specified time.
HEAD: designed to provide a reply with the same headers as GET, but without the body, delivering metadata about a resource efficiently.
POST: used by the client to send a piece of data to the server. It is designed to carry entities such as extensions to resources already present on the server side, or even new resources or blocks of data. It is the primary method for the client to deliver information to the server.
OPTIONS: used by the client to ask what communication options the server supports. In essence, this method requests a reply listing the methods the server is programmed to listen and respond to, but it can also provide information about the server’s capabilities or requirements.
PUT: the method by which the client names a resource and requests that the server store the data attached to the message under that resource. If the resource is not present on the server side, it should be created and supplied with the data in the message.
DELETE: this method is used by the client to provide the identifier for a resource and then request that the specific resource be deleted from the server’s database.
TRACE: used by a client to request an exact echo of its message, so it can analyze the data as received by the server and see how it was modified along the way.
CONNECT: a request by the client to switch the HTTP session from a web transaction to an SSL tunnel.
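As a client-side illustration, the sketch below issues a conditional GET from Java using HttpURLConnection; the URL and date are placeholders.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpGetSketch {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://example.com/index.html");     // placeholder endpoint
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");                     // the HTTP method
        connection.setRequestProperty("If-Modified-Since",      // conditional GET header
                "Fri, 01 Jan 2016 00:00:00 GMT");

        int status = connection.getResponseCode();              // e.g. 200 OK or 304 Not Modified
        if (status == HttpURLConnection.HTTP_OK) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                // each line of the response entity (the HTML body) arrives here
            }
            reader.close();
        }
        connection.disconnect();
    }
}
```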
Regarding the actual programming of the server, one of the most widely used scripting languages is PHP. Originally an acronym for “Personal Home Page”, it now stands for “PHP: Hypertext Preprocessor”, making it a recursive acronym. Its most distinctive feature is that, unlike other scripting languages, PHP can easily be embedded directly into HTML. As described earlier, Android works by binding XML files, which handle the graphical display, to Java files, which carry the actual algorithmic programming. Unlike Android, however, PHP allows a single file to interleave lines of HTML and PHP.
Fig. 2.2.3-1 – PHP scripting, showcasing its distinctive capability of being integrated within HTML lines of code.
This allows PHP to offer much more compact programming than languages like C or Perl, which rely on printing large amounts of HTML text in order to integrate programming logic into web pages. Another useful trait of PHP is that the code itself lives on the server; although it is written between HTML text, the processing produces pure HTML content (the PHP code being, in effect, the thought process of the server), thus revealing to the client no information whatsoever about the server’s programming structure.
Generally speaking, PHP is a wide-ranging programming language, capable of virtually any task. Since it does not necessarily need a server but can function with just a PHP parser, it can work simply as a command-line scripting platform, on either Linux or Windows, supporting easy-to-build processing scripts. Moreover, due to its open nature, PHP also supports extensions that heavily increase its functionality. There are even extensions adding graphical libraries, allowing the language to produce actual desktop programs, complete with a graphical user interface. While neither ideal nor the most efficient approach, it is entirely possible.
First and foremost, however, PHP is a server-side scripting language. Such a system requires the aforementioned PHP parser to interpret the code, as well as a dedicated web server to run it; a client can then take the form of a web browser accessing the server’s HTML output. PHP, being as open as it is, is largely independent of the operating system: it runs on web server software such as Apache, which can be installed on Linux, Windows, Mac OS or even RISC OS. While it was already mentioned that PHP can output pure HTML content, it is by no means limited to it, being capable of producing any number of file types, from PDFs to images, to even video and audio data generated live. In terms of database usage, PHP supports most formats on the market, including SQL variants, with specific extensions to deal with them.
In terms of the scripting itself, as shown in the previous figure, PHP delimits its processing instructions with the “<?php” and “?>” tags. The installed parser will only interpret and execute code within these delimiters, which is what allows PHP to be inserted into web code and still be recognized by the processing machine. Specific to PHP is its handling of variables. Identified by the dollar sign, variables do not need a declared type; each is instantiated as whatever type its assigned value represents. A separate, complementary feature is “type hinting”, which lets functions and methods declare the types they expect their parameters to have.
Regarding actual syntax, PHP is mostly similar to other C-like programming languages, with specifics such as the “echo” command that outputs text, the block comment delimiters “/*” and “*/”, and the line comment delimiter “//”. Statements are terminated by semicolons, as the language treats line breaks and new lines as whitespace and disregards them.
Originally, PHP programming was composed of mainly two elements: variables and functions. Variables have a dynamic data type, as mentioned above, and can easily be converted between types such as integers, floats and even Boolean values – the latter interpreting zero as false and anything else as true. Additionally, PHP supports resource-type variables. These represent the special data types created by extensions, and so are handled only by the specific functions in those extensions; examples include audio/video files or database handles. Functions are the methods that perform the logical operations, as in any other programming language. More recently, support for object-oriented programming has also been added to PHP, allowing it to create and manipulate objects and adding a further level of abstraction to ease the developer’s job.
Very important for PHP’s functionality is its support for web frameworks, most notably the open-source CakePHP. Frameworks are, in essence, collections of libraries and templates that ease the construction of various web applications and websites by providing all the necessary tools, already optimized for their specific tasks. Most of these frameworks, including CakePHP itself, use the MVC (Model-View-Controller) architecture.
This architecture is a pattern used to implement user interfaces and interaction in client-server systems, originally used as a guideline for most desktop programs but since extended to websites and web applications. To simplify the entire transaction, it separates the server system into several components, each designed to perform a different task.
Traditionally, the MVC architecture splits the server into three components, listed as follows:
Model: the core of the application, containing its internal logic. As such, it is responsible for the direct manipulation and management of data.
View: somewhat similar to Android views, this represents the actual building blocks of the graphical user interface, ranging from text, to images, to more complex charts and graphical data.
Controller: the intermediary between the user and the server; it receives input from the client, translates it and forwards it to the model.
Fig. 2.2.3-2 – Basic diagram of the MVC architecture.
As stated beforehand, CakePHP is one such framework built around the MVC architecture and written, naturally, in PHP. Originally developed by programmer Michal Tatarynowicz in 2005 as a simple application extension of PHP, thanks to its open-source nature it was quickly picked up by various other developers and grew into one of the best-known PHP web frameworks.
Cake itself provides the user with an all-encompassing toolbox that facilitates the use of all its features. The framework also employs the “Convention over Configuration” design, in that it already defines a set of conventions for objects, classes, file and database names, and many other elements. This allows the user to focus less on configuring the building blocks and merely know which ones to use, as they come pre-configured.
In keeping with the MVC architecture, the Model layer of Cake is the core of the application, the layer that implements the actual algorithmic logic. It receives structured data and processes it as designed by the developer – for example, user authentication and validation, audio/video processing, or simply searching and querying a database. The View layer is the graphical representation of the data processed by the model. It presents this data using any number of elements, usually in HTML, but it is open to various extensions, allowing it to deliver information in formats such as XML, JSON and others. The Controller is the actual listener service for the user: it handles requests from clients and is responsible for processing them into usable information and forwarding this to the model. Particular to Cake, the controller also serves as a general-purpose manager of all process requests within the framework, not only deciding which model receives which piece of information, but also handling the delivery of the model’s data to the correct views.
Fig. 2.2.3-3 – Diagram of CakePHP’s MVC model, showing its particularities.
Chapter Three. Project Implementation
Hardware
While the ideal physical support for a website or web application has been described as a powerful, datacenter-grade server containing Xeon-class processors and ECC memory, for the purposes of this project a sufficiently powerful laptop was deemed suitable. Specifically, an Asus ROG G751-JY high-performance gaming laptop was used.
The most significant component, with the greatest impact on a website’s and/or web application’s performance, is the main processor. In the current case, the laptop is equipped with an i7-4710HQ Haswell-generation processor. Launched in 2014, the CPU sports 4 cores, multiplied to 8 logical threads through Intel’s proprietary Hyper-Threading technology. As evidenced by the benchmark below, while still a laptop processor, the 4710HQ is powerful enough to rival its desktop counterparts.
Fig. 3.1-1 – Various synthetic benchmarks comparing the i7-4710HQ, a mobile CPU, with the i7-4770, the equivalent top-range desktop CPU.
Other noteworthy specifications include the 8 GB of RAM – not ideal, but sufficient for the task at hand – and SSD storage, which keeps the web app from being bottlenecked by slow transaction speeds. The operating system running on the platform is Windows 10 Professional, compatible with all the software necessary to run the web server.
On the mobile end of the system, a Sony Xperia Z5 was used as the Android smartphone platform. The Android application itself is not at all hardware-intensive, nor should it be, so the mobile phone’s chipset is more than sufficient to run it. Specifically, the Z5 is equipped with the previously discussed high-performance Qualcomm Snapdragon 810 chipset, delivering all the power needed.
Other notable features are the 5.2-inch, 1080×1920 IPS LCD display, capable of utilizing Android’s high-DPI application profile, as well as 32 GB of internal storage and 3 GB of RAM, enabling the entire chipset to perform optimally. In terms of software, the smartphone runs Google’s latest Android Marshmallow version, 6.0.1 (Android 7, nicknamed Nougat, has been announced, but has thus far not shipped on any device). This allows the device to be fully compatible with any version of an Android application, supporting all the recent libraries, toolsets, app components and functionalities.
On the actual wearable side of the system, the Huawei Watch was used. This was the company’s first foray into the smartwatch market, arriving relatively late compared to competitors which had already released several iterations of wearable smart devices, either sporting Google’s Android Wear or running proprietary operating systems.
The watch sports a relatively high-DPI screen of 1.4 inches at a 400×400 pixel resolution, managing a display density of 286 PPI (pixels per inch). Although low compared to high-end smartphones, this is constrained by the limited power the SoC can deliver. Huawei also chose to forgo the LCD screens generally chosen by smartwatch manufacturers and instead used a P-OLED (Polymer Organic Light-Emitting Diode) display. As discussed beforehand, this technology allows the watch to selectively turn on only a handful of pixels in order to deliver a minimal display at significantly reduced battery drain. In practice, this allows the use of Android Wear’s Ambient Mode without majorly impacting battery life.
In terms of performance, the watch is identical to almost all current-generation Android Wear smartwatches, being equipped with the Qualcomm Snapdragon 400 SoC. This chipset’s CPU contains four Cortex-A7 cores clocked at 1.2 GHz, alongside a similarly low-power GPU, the Adreno 305. In terms of memory, the watch contains the standard 4 GB of storage and 512 MB of RAM common to all Android smartwatches. Regarding software, the watch runs the latest Android Wear version, 1.5, likewise supporting every app feature introduced to date.
Finally, perhaps the most important feature of the watch is its usable heartrate monitor, in essence the core of the entire project.
Fig. 3.1-2 – Illustrations of the laptop and smartwatch platforms.
Software
The software implementation represents the actual programmed portion of the entire project, specifically the Android application (since the Wear app is seen as an extension of the main smartphone application) and the web application that acts as the server.
Android Wear Application
The entire Android application was coded using the official IDE (Integrated Development Environment), Android Studio. While ADT (the Eclipse Android Development Tools) used to be the main Android IDE, it was discontinued between late 2014 and early 2015.
Besides the IDE’s native support for Android, certain factors favored the use of this IDE over other commonly known ones. The program uses Gradle to build its applications, an automated build tool characterized by optimized patching and refactoring. The tool uses a tree-based compilation structure, allowing it to quickly pinpoint out-of-date elements and update only those, a much faster process than recompiling the entire application.
Other features specific to Android Studio are its vast range of application templates, giving a head start on virtually any app, and, most importantly, its native, out-of-the-box support for Android Wear application extensions. The IDE builds all the necessary files for a Wear app, allowing the programmer to concentrate only on the code itself.
Starting from the Wear side of things, the application itself is an extension of the smartphone’s app, being programmed within the same project. Inside the Wear app’s directory within this project are the resources used by the app extension, the compiled build components and the original source code, structured as the “build” and “src” folders respectively.
The src folder contains the actual programming of the app, that is, the “main” folder holding all the programmed logic, plus a few files used by the Android system and by the Android Studio program. Inside the “main” folder are two subfolders containing the two separate underpinnings of an Android app: the Java files and the XML files. Besides these, the AndroidManifest.xml file is also located here, discussed previously as the file containing all the information about the application’s modules.
Fig. 3.2.1-1 – Android Studio snapshots revealing the “wear” folder’s structure.
The “drawable” folders represent the image resources used by the app. In this case, there is a single image, the application icon, optimized for different dpi profiles.
The “values” folder contains the “strings.xml” file, which is a repository for all string pointers found in the app. Generally speaking, Google discourages the use of hard-coded, static strings within an app and instead urges the use of string pointers, whose values are stored here. This is because it enables multi-language support: the pointers within the app can be resolved in a multitude of ways, depending on what system language is set on the Android smartphone. This functionality, however, was not implemented in this project.
Fig. 3.2.1-2 – Snippet of the strings.xml file.
The last folder within the “res” directory, the “layout” folder, contains the actual graphical user interfaces displayed. These are all XML files, and the first of them, “activity_my.xml”, is the first layout loaded when the application starts. It is characterized by its use of the “WatchViewStub” class. This specific type of view permits the use of different child views depending on the shape of the device; in this case, the ViewStub contains parameters to distinguish between rectangular and round watch shapes.
Fig. 3.2.1-3 – The activity_my.xml file, showing the two different layout options.
One of the two layouts, “rect_activity_my” or “round_activity_my”, serves as the main layout on the watch – in the current case, the round one. These layouts each contain three distinct text-bearing views (TextViews), each with a specific task, and each uniquely identified by an ID of the form “@+id/view_name”. The first is the “rate” view, tasked with displaying the measured heartrate. The second represents the accuracy adjustment, and the third relays information about the sensor. All of these TextViews also contain parameters used to size and place them on the screen. After these three text boxes there is also a button view, identified as “ss_service”, which controls the starting and stopping of the service tasked with recording the heartrate.
Within the “java” folder of the “main” directory lies another subfolder which itself contains the java elements that perform the logical programming of the application. In this case, there is a java activity and a java service.
The java activity, called “WearActivity.java” is the main activity of the application, and is called at the launch of the app. The file itself first contains a package declaration, which is the way by which the application is uniquely identified within an Android system. Next are the libraries imported and used by the application, after which the actual activity is declared and initiated, starting with the global variables.
After that, the connection to the service is created through the “ServiceConnection” class, and the “onServiceConnected” and “onServiceDisconnected” methods are overridden. These methods are called automatically as the service is connected and disconnected, respectively. Here, the connection method retrieves the binder object and assigns its service to a variable, “mService”, while the “mBound” flag is set to true, signifying that the service is bound. When the service is disconnected, “mService” is set to null and “mBound” to false.
A broadcast receiver is then instantiated, which listens for the broadcast sent by the service, extracts the “WEAR_MESSAGE” extra value from that intent, and sets that value on the “rate” TextView.
The actual implementation of the activity is then coded in the “onCreate()” method, which is called as soon as the activity is started. This starts with the instantiation of a ViewStub and its binding to the main layout stub. A listener is then used to receive notification of when one of the two sublayouts has been inflated; when it has, the “rate” object is bound to the “rate” TextView and its text is set to a default “Reading…”.
The button is then programmed. When clicked, it first checks whether the service is running, through the “isServiceRunning()” method implemented at the end of this class. If the service is running, the button’s text is set to “Start service” and the service connection is checked; if this is also running, the bound service connection is terminated, the “mBound” variable is set to false, and the service itself is stopped. Otherwise, the button’s text is set to “Stop service”, the service is started, the activity is bound to the service and “mBound” is set to true.
After this, the broadcast receiver instantiated above is registered to this activity, and its filter is set in order to receive messages from the “WearService” service, inside the “onStart()” method. The same receiver is set to be unregistered when the application is stopped.
Finally, the “isServiceRunning” method is implemented, which checks if the service is running.
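For illustration, a hedged reconstruction of the connection logic just described is sketched below; “mService”, “mBound”, “WEAR_MESSAGE” and the “rate” TextView follow the text, while the LocalBinder inner class and the omitted binding code are assumptions, not the original file.

```java
import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.IBinder;
import android.widget.TextView;

public class WearActivitySketch extends Activity {

    private TextView rate;          // bound to the "rate" TextView after layout inflation
    private WearService mService;
    private boolean mBound = false;

    private final ServiceConnection mConnection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder service) {
            // Retrieve the binder handed out by the service and keep a handle to it.
            mService = ((WearService.LocalBinder) service).getService();  // assumed binder class
            mBound = true;
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            mService = null;
            mBound = false;
        }
    };

    private final BroadcastReceiver mReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // Pull the heartrate value broadcast by the service and display it.
            rate.setText(intent.getStringExtra("WEAR_MESSAGE"));
        }
    };
}
```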
The second Java file, the “WearService”, starts off similarly to the main activity, listing the imported library files. It then declares itself as an extension of the “Service” class implementing the “SensorEventListener” interface, followed by the various variables used within the class.
The first method is an overridden “onDestroy()” implementation, which unregisters the listener service and disconnects the connection to the Google API service. This service is what actually permits the app to access the smartwatch’s data layer, and so transmit data across activities and across devices.
Then the “onCreate()” method is modified to implement the actual programming. First of all, the connection to the Google API service is opened. The next method is incidental to the application: it is an override of the listener tasked with managing failed connections, implemented solely for logging purposes. Finally, the Wearable API is added to the Google API service connection so that the application may use it.
A new thread is then created, instantiated as “tt”, which first checks whether the API service has been created and whether the app is connected or connecting to it. The thread then uses a “NodeApi” object to scan for connected nodes – that is, connected devices, i.e. a smartphone. The results are put into a list object named “nodes”, from which the node ID of the first device found is retained. The connection to the API service is then stopped, as the app is done with the thread for the moment.
The sensor manager and the sensor itself are then instantiated, and the manager is set to listen to this specific sensor.
Lastly, the app implements a method that updates the heartrate value when it has changed, an override of the predefined “onSensorChanged()” method. It first checks whether the value has indeed changed, and then restarts the “tt” thread, which in turn restarts the Google API service connection if it is down. It then collects the value from the sensor and turns it into a byte stream, which is sent to the designated node via the Google API service. The same value is then sent as an Intent to the “WearActivity” activity, in order to be displayed on screen.
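A hedged reconstruction of that callback is sketched below; the field names (lastBpm, mGoogleApiClient, mNodeId), the message path and the broadcast action are assumptions standing in for the originals.

```java
import android.content.Intent;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Wearable;

// Reconstruction sketch of the service's sensor callback, not the original code.
public abstract class HeartrateSensorSketch extends android.app.Service
        implements SensorEventListener {

    private int lastBpm = -1;                  // assumed change-detection field
    private GoogleApiClient mGoogleApiClient;  // assumed, connected elsewhere
    private String mNodeId;                    // assumed, found via the NodeApi

    @Override
    public void onSensorChanged(SensorEvent event) {
        int bpm = Math.round(event.values[0]); // TYPE_HEART_RATE reports beats per minute
        if (bpm == lastBpm) return;            // only act when the value actually changed
        lastBpm = bpm;

        // Push the reading to the paired handheld through the data layer.
        byte[] payload = String.valueOf(bpm).getBytes();
        Wearable.MessageApi.sendMessage(mGoogleApiClient, mNodeId, "/heartrate", payload);

        // Mirror the value locally so WearActivity can display it.
        Intent intent = new Intent("com.example.HEARTRATE");  // illustrative action
        intent.putExtra("WEAR_MESSAGE", String.valueOf(bpm));
        sendBroadcast(intent);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```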
Android Mobile Application
Since the full mobile application is an actual Android application, everything mentioned previously about the Wear side of the project is also applicable here, from the IDE and building of the app to the folder structure itself. Since it was discussed that the “src” folder contains the actual programming of the app, we’ll start from there.
Inside this folder are the two folders discussed previously: the Java folder and the resources one. The “res” folder holds most of the information needed to display the app, that is, to render information on the device’s display. Most important here are the layouts, of which there are two: “activity_auth.xml” and “activity_main.xml”. The two layouts are fairly simple, serving only to match entities on the display to objects in the Java files. The authentication screen is the first one that appears when launching the app; it uses an authentication form to collect the information needed to connect to the server – the server IP, username and password – followed by a simple button attached to the login function. The user is then directed to the main screen, which holds a TextView with the heartbeat and a reference to the line chart that will display the heartbeat history.
The Java folder itself contains the many classes used in the project. First and foremost there is the MainActivity class, the primary activity in the application and the one attached to the “activity_main.xml” file. It starts off by instantiating all the data it needs, from a simple TextView to a broadcast receiver for the service, as well as a database object (which will be discussed shortly) and the chart data. On being created, this activity immediately queries the database for heartbeat values, without waiting for specific input. It then uses these values, stored in an array, to populate the chart that shows the historical heartbeat information. Afterwards, it edits the TextView at the top of the display to show the current heartbeat and also starts the receiver that will collect heartbeat information through the listener service. Two methods are then coded to start and stop, respectively, the receiver for the listener service. After this, the method to update the chart is created; in essence, it uses the same logic as the corresponding segment of the onCreate() method to collect the information from the database, store it, build a dataset from it, and then use that dataset to populate the chart. Methods to inflate and interact with the options menu are then programmed: specifically, a toggle for the live listener service (using the two start/stop methods mentioned above), settings to switch between day, month and year heartbeat views, and a button to log out of the current instance of the application (which forwards the user to the AuthActivity class). Lastly, a customized version of the public AxisValueFormatter class is used, which the chart employs to display its data.
The first activity the user is actually presented with is AuthActivity. This one handles, as the name implies, the authentication with the web server. In the beginning, the activity clears any possible leftover objects from previous executions and instantiates the UI elements it will use. When created, it employs these UI elements to populate the associated XML with the form fields required for validation: server IP, username and password. It also links the sign-in button and the progress animation to their respective XML objects. Of note, the onResume() method is overridden so that when the activity is resumed, the app tries to authenticate with the web server using the saved token. Next, the method used to attempt the authentication is coded. It first loads the information the user inserted in the form fields into corresponding variables, then checks that the username and password fields are not empty, as they are not allowed to be, and finally instantiates a new UserSignInTask object (a class discussed below) and executes it. A standard, public method of animating a loading icon is then coded, as per the official instructions on the Android Developer website. Finally, the two authentication classes are coded: one that uses the username and password entered by the user in the form, and another that uses the token received from the website. UserSignInTask is the one using the form information, and it extends the AsyncTask class. It first creates the variables for the form data and defines how these variables will be passed as input to the class. In the background, it then calls a ServerConnector method (a class discussed shortly), passing the form data as parameters and expecting in return the authentication token from the web server. This token is then saved to a file using the Settings class’s methods. If the authentication is successful, an intent for the main activity is generated. The final method, the token-based authentication, proceeds in much the same fashion, save that instead of submitting a username and password through the ServerConnector method, it sends the token and expects a response based on that.
Two slightly different classes inside the java folder are the Heartbeat and Settings files. The former is different simply because it is written for ease of access to heartbeat objects and contains only the fields necessary to correctly define a heartbeat. The latter, because it is a class used for saving and retrieving information stored locally. It contains methods to write files to the local disk and to read them back. It is this class that holds the authentication information used in the other classes, specifically the token, server address and id, and it also has methods to access or delete these files.
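A minimal sketch of the token-handling side of such a Settings class is shown below, using Android’s internal-storage file API; the file name and method names are illustrative:

    import android.content.Context;
    import java.io.ByteArrayOutputStream;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Sketch of a Settings-style helper persisting the auth token locally.
    public final class Settings {
        private static final String TOKEN_FILE = "auth_token";

        public static void saveToken(Context context, String token) {
            try (FileOutputStream out =
                     context.openFileOutput(TOKEN_FILE, Context.MODE_PRIVATE)) {
                out.write(token.getBytes("UTF-8"));
            } catch (IOException ignored) {
                // In a real app this failure should be surfaced to the user.
            }
        }

        public static String readToken(Context context) {
            try (FileInputStream in = context.openFileInput(TOKEN_FILE);
                 ByteArrayOutputStream buffer = new ByteArrayOutputStream()) {
                byte[] chunk = new byte[256];
                int read;
                while ((read = in.read(chunk)) != -1) {
                    buffer.write(chunk, 0, read);
                }
                return buffer.toString("UTF-8");
            } catch (IOException e) {
                return null; // no saved token, or the file is unreadable
            }
        }

        public static void deleteToken(Context context) {
            context.deleteFile(TOKEN_FILE);
        }
    }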
There are also two classes that deal with configuring and manipulating the database in which the application stores all the heartbeats: DatabaseContract and DatabaseHandler. The former simply holds variables that make SQL operations easy to manage, with data for the table name, column values, creation date and sync status. DatabaseHandler, however, extends SQLiteOpenHelper, a public class used to manage SQL databases within an Android app as easily as possible. Its onCreate() method is overridden to generate an SQL table with the fields configured in the DatabaseContract class. Afterwards, the methods used to manipulate data inside the table are defined. First is the addHeartbeat method, which takes the value and sync status as parameters, puts them in a ContentValues object instantiated as values, and then uses the inherited ‘insert’ method to place this data inside the database and return the id of the newly created row. The next method takes a heartbeat id as an input parameter and sets the sync status column of that heartbeat inside the table to true. The following method goes hand in hand with the last, as it produces the list of unsynced heartbeats: an SQL query is configured, asking for the ids of all heartbeats whose sync status column is set to false, and all of these heartbeats are written to a list that the method returns. The last method in this class is a general request for heartbeats, with a limit and the period for which the heartbeats are requested as input parameters. Depending on the selected period (day, month or year), a “WHERE” SQL condition is generated, covering either the current day or a span starting at the beginning of the current month or year. This condition is then used in an SQL query asking for all heartbeats that comply with it; the results are inserted into a list, which the method returns.
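To make the insert and sync-update paths concrete, here is a minimal sketch; the table and column name strings stand in for the constants actually kept in DatabaseContract:

    // Inside DatabaseHandler (which extends SQLiteOpenHelper).
    public long addHeartbeat(int value, boolean synced) {
        SQLiteDatabase db = this.getWritableDatabase();
        ContentValues values = new ContentValues();
        values.put("value", value);                // the heartbeat itself
        values.put("sync_status", synced ? 1 : 0); // synced with the server?
        // insert() returns the row id of the new row, or -1 on failure.
        return db.insert("heartbeats", null, values);
    }

    // Mark a stored heartbeat as synced with the web server.
    public void setSynced(long id) {
        SQLiteDatabase db = this.getWritableDatabase();
        ContentValues values = new ContentValues();
        values.put("sync_status", 1);
        db.update("heartbeats", values, "id = ?",
                new String[] { String.valueOf(id) });
    }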
For communication with the server, the app uses two classes, ServerConnector and HTTPConnector. The latter is designed to ease the transmission of HTTP messages and has a generalized method that can execute almost any kind of HTTP request, specifying the endpoint, request type, parameters and headers, and it can handle both plain HTTP and HTTPS. The ServerConnector class uses the HTTPConnector methods to send the app’s specific requests to the server. This class starts with a way to instantiate it given the server endpoint and then defines the login method. This method inserts the credentials into hash maps, loads these into the parameters variable, and then uses an HTTPConnector object to forward the data to the server. The next method is designed to retrieve user information from the server by supplying the authentication token in an “X-Authorization” HTTP header. This header is the keyword the server looks for when listening for requests that come from the app with the token. Another method that relies on this header is “sendHeartbeat”. Since it would be unrealistic to provide a username and password every time a heartbeat is sent to the server, the app instead attaches the server-supplied token to this “X-Authorization” header for every heartbeat transmission.
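As a sketch of what such a token-authenticated transmission could look like (the endpoint path and JSON body shape are assumptions, not the server’s documented contract):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch of a heartbeat upload authenticated by token.
    public boolean sendHeartbeat(String serverUrl, String token, int value)
            throws IOException {
        URL url = new URL(serverUrl + "/api/heartbeats"); // illustrative path
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            // The server identifies app requests by this header and token.
            conn.setRequestProperty("X-Authorization", token);
            conn.setRequestProperty("Content-Type", "application/json");
            byte[] body = ("{\"value\":" + value + "}").getBytes("UTF-8");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            return conn.getResponseCode() == HttpURLConnection.HTTP_OK;
        } finally {
            conn.disconnect();
        }
    }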
Finally, there is the WearListenerService java file, which handles the connection between the mobile phone and the smartwatch and collects the heartbeat information. At the beginning, it declares all the data it needs, including a ServerConnector object and a DatabaseHandler one. In the overridden initialization method, the class first tries to use the saved server address to log in with a token. Inside the “onStartCommand” method, which executes every time the service starts, the app requests the list of all heartbeats that exist in the phone’s database but have not been synced with the server, and sends them to the server. Next, the method that listens for messages is configured. Whenever a message is received (checked through the Wearable Message API), a ByteBuffer object is created to extract the data from the message. The integer value from the message is saved in a variable, which is then written to the table using the DatabaseHandler object created previously. The same value is also sent to the main activity to be displayed on screen. If a token exists, the service then tries to send the heartbeat to the web server; if the server responds that the action was successful, it also sets the sync status of that particular heartbeat in the phone’s database to synced. Lastly, the method used to attempt the token sign-in is configured in the same fashion as in the AuthActivity class. If the attempt fails, however, an intent is generated to direct the app to the authentication activity, so the user can sign in with username and password.
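Under the assumption that the service extends the Wearable API’s WearableListenerService, the message-handling path could be sketched as follows (the message path, broadcast action and trySendToServer() helper are hypothetical):

    // Inside WearListenerService: handle a heartbeat message from the watch.
    @Override
    public void onMessageReceived(MessageEvent messageEvent) {
        if (!"/heartbeat".equals(messageEvent.getPath())) {
            return; // not a heartbeat message
        }
        // Decode the integer heartbeat value sent by the watch.
        int heartbeat = ByteBuffer.wrap(messageEvent.getData()).getInt();
        // Store it locally; the sync status starts out as false.
        long rowId = databaseHandler.addHeartbeat(heartbeat, false);
        // Notify the main activity so the value can be shown on screen.
        Intent intent = new Intent("HEARTBEAT_UPDATE");
        intent.putExtra("value", heartbeat);
        LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
        // If a token exists, try to push the value to the web server and,
        // on success, mark the row as synced (hypothetical helper).
        String token = Settings.readToken(this);
        if (token != null) {
            trySendToServer(rowId, heartbeat, token);
        }
    }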
Web Server
As discussed during the theoretical presentation, the web server in the project is coded in PHP, using CakePHP 3, an MVC PHP framework. The IDE used for the coding is PhpStorm. Worth noting is that, unlike most other MVC frameworks, CakePHP separates the usually unitary ‘model’ object into two objects: the Entity and the Table.
An Entity refers to the object itself (in this case User, Heartbeat, etc.) and, in database terms, it represents a single row. Such objects usually contain global properties meant to be shared among all instances, as well as methods to edit, view or delete the information they hold. Manipulating data within an instanced entity can happen in a few ways. The first, and simplest, is just using the object as it is defined, through object notation. Another is using the get() and set() methods, which have direct access to the data.
Fig 3.2.3-1 – Examples of accessing and manipulating data within entities. Top: the set and get commands. Bottom: using object notation.
A Table refers to the actual database table and all actions directed at it; such an object is therefore necessary to interact with any given table. An advantage of CakePHP is that it creates a Table instance whenever one is needed, giving the programmer access to the most widely used methods of interaction with that table. When creating tables, certain conventions apply, such as the name of the database table the class will use, the expected primary key, or the name by which the Table object looks up its Entity class. All of these can be customized inside the “initialize” function, a method called at the end of the constructor. Its purpose is exactly this: to fine-tune tables without risking overriding the constructor itself, which could prove problematic if mishandled.
Within the project itself, the file structure is mostly populated automatically by the CakePHP framework (a distinct advantage of using frameworks, and especially this one, as the programmer only needs to write the parts that must be customized to the task at hand). First and foremost, it contains the “bin” folder, which is specific to this framework and holds initialization data.
Next is the “config” folder, which contains the routing mechanism of the framework. The purpose of routes is to create paths to the various sections of the website within the HTTP addresses themselves. The routes.php file holds all of these associations, in effect binding HTTP links to controllers and their actions. What is interesting here is that some routes are not simple one-to-one static associations. Rather, they can take certain data from the HTTP link, verify that it conforms to a desired structure (such as checking that an id field contains only digits), and pass that data forward to the action specified in the route. In one specific case here, when the “/heartbeats/:user_id” link is accessed, the server first checks that user_id contains only digits, and then passes that value to the “index” action of the “heartbeats” controller. Inside the “config” folder are also the “app.default.php” and “app.php” files, which hold the settings the server starts with, in practice the initial defaults. Of note is the mail transport functionality defined here: since the server can send an email to the doctor when a patient reaches a critical heartbeat, the settings required to reach and authenticate against a mail server are set here (in this case, a simple SMTP-over-TLS connection to a mail provider). The “bootstrap” and “schema” files are defined automatically by the framework and contain framework-specific data; schema, for example, contains internationalization data such as languages. The Seeds folder contains the objects the database should start with, which is a convenient way to launch the web server with some information already present for testing its functionality (in this case, three patients, each with the “patient” password). The last element in the “config” folder is the migrations. These function as links, or translations, between a model (such as User) and the SQL database. They are CakePHP’s way of simplifying and streamlining database interaction by letting PHP code handle all interactions with the database; in effect, for every modification of a table’s structure, there is a migration.
The other important folder is the “src” folder, where the actual implementation of the code is stored. All the server elements described beforehand are here, meaning the actual MVC components and all custom classes. The “View” folder contains the “AjaxView.php” and “AppView.php” files, which are generated automatically by CakePHP as defaults and have not been customized. The same is true of the “Shell” and “Console” folders.
The first folder the user interacts with, so to speak, is the Template folder, which contains the actual pages that present information to the user. In a way, this is analogous to Android’s XML files: in the background sits the algorithmic logic of the program, and in the forefront the layout presented to the user. These layouts are all contained here and reference elements from the computational files. The templates are mostly written in HTML but contain PHP code as well. Furthermore, templates are not singular, absolute objects: one template can contain another, and a web page can contain multiple templates at the same time. The “users” folder, for example, holds most of the webpages the user has access to, such as the doctor listings, the index containing the patients, the pages to add doctors or patients, and the main view. This main view (the “view.ctp” file) is the webpage the user sees after login, and it also contains the graphing function and the JSON functions used to access the heartbeat information.
Next is the “Model” folder. As explained before, the model is characterized by an entity object, which represents the object itself (the user, a single row in the table), and a table object, which governs the table as a whole and the rules that apply to it. The entity can hold data about which fields are publicly accessible and which are protected, functions regarding database fields such as passwords, and so on. In the table objects, however, the relations between objects are defined; for example, a user entity can have many heartbeat entities and many histories. A table can also contain field validators, to make sure the information is correct. In our case, there are three entities: Heartbeat, History and User. The last is fairly self-explanatory and is a basic model: all it contains are settings regarding accessible fields (the ‘id’ is not accessible, and the password is not only inaccessible but also hidden) and a protected function that allows the password to be set and also hashes it, so that anyone who broke into the database would only have access to the hash. Next is the History object, designed to hold the patient’s diagnostics history. This one is even simpler than the User object: it is merely a declaration of an entity model with the default settings. The same is true of the Heartbeat entity. In a separate folder are the table objects: HeartbeatsTable, HistoriesTable and UsersTable. In the last, the object is defined as extending the Table superclass and configured with the database table it should be bound to, the primary key, and the relations to the other objects (as mentioned before, a user has many heartbeats and many histories). Next come field validators, specifying which fields may be empty, which are mandatory, and when these rules apply. The HistoriesTable object is created similarly, just with different relation rules: the histories are set to belong to the Users objects, and their foreignKey is defined. Validators are again provided for the fields in this table, specifically the treatment and diagnostic fields. The heartbeats table is coded almost identically to the histories one, belonging to the Users object and having some validators at the end.
Next, there is the Auth folder, which contains a single file: TokenAuthenticate.php. This class does not conform to any MVC object format; it is simply a custom class designed to ease the authentication mechanism used in the controllers. It extends the FormAuthenticate class, which is the authentication platform in CakePHP, and adds two custom functions. One is the authentication function; the other is a method that queries the user based on the token given in the “X-Authorization” HTTP header. Should no token exist, the true value is returned, and should either no user, or multiple users, exist with the same token, the false value is returned. Otherwise, the matching user is returned.
Finally, the last folder inside “src” is the Controller folder, which holds controllers for all the models declared previously, plus a few extra controllers for other functions. This is, in essence, the actual programming of the website, the logic itself. First of all, there is UsersController.php, which contains all actions on the user object within the web interface. At the beginning, filters restrict access to the actual pages until the user has authenticated; until then, the user only sees the authentication form and is allowed only the login and logout methods. Next, the index is rendered, which is the element of the website containing the list of patients. Simple, basic functions are then declared: the ability to view a user with his associated heartbeats and history, the option to add another patient or another doctor (notably, only a doctor can add another doctor), and a list of all doctors. Options to edit or delete users are also present. The login function is defined last, using the Auth component, which provides methods such as “identify()” that check the username and password and return the user. The user is then checked to see whether he is a patient or a doctor: a doctor is redirected to the main page, while a patient is shown only his own personal view page, as he is not allowed more than that. Next is the HistoriesController, which, similarly to the users, starts with an index listing the history of a user and then moves to the general functions: a simple view function, an add function to update a patient’s history, and an edit function to modify an existing history. A delete option is also available.
HeartbeatsController.php is somewhat different, in that it is designed only for the interaction between the web server and the mobile app. As such, its filters only allow connections that present the X-Authorization header token, only allow the add function, and also remember that user in order to attribute the heartbeat to the correct patient. An index function is also present, capable of grouping the heartbeats by day, month or year, as requested. A default view function is listed, and then the add function, which instantiates a new entity of the Heartbeats object and attaches it to the corresponding user. Finally, the default edit and delete functions are described.
AppController.php is something of a master controller for the entire website. It is the only controller that extends the Controller superclass, and all other controllers in this project extend AppController (with one exception, discussed shortly). It is therefore the place to define global, streamlined settings that all controllers should have by default. First, the “initialize()” function is defined, loading the required components, listing the redirects for the login and logout functionalities, and describing the format used to authenticate users. This is also where each user is categorized as doctor or patient, admin or not, and thus granted access to the appropriate resources.
Finally, the only other controller that does not extend the one discussed above is ApiController.php. It is intended as the controller that communicates directly with the mobile app, and it has its own initialization, with its own components. The login function is programmed next: it checks the details used for authentication and, if they are correct, dynamically generates a unique string, a UUID, and attaches it to the user as the authkey, after which the entire user (including the authkey) is passed on to the mobile app. A getUser() function is then coded, which behaves similarly to the last part of the login function. Finally, the addHeartbeat() function is defined, which instantiates a new Heartbeat entity with the data received from the mobile app, checks the user and attaches the heartbeat to that user. It also checks whether the received value exceeds the alert threshold and, if it does, uses the Email() function to send a warning mail to the doctor.
Conclusion
This project, therefore, aims to bridge the inefficient gap between user and information, doing away with the practice of sharing data through strictly physical means. All of this leads to several advantages and important points of progress.
First and foremost, queues at the doctor’s office would ideally be eliminated, as the bulk of the people waiting in such lines in fact wait for brief consultations regarding their diagnostics. The issue is that the doctor only sees and examines test results and findings at that specific moment, when the patient presents the file. Migrating this to a digital platform allows the doctor to examine most of what is needed, tests included, directly from a web portal, without the person having to visit the clinic. This saves time for both parties and allows a much more efficient, streamlined process of providing a diagnosis and prescribing a treatment.
Moreover, it makes all of this information as easily accessible to the two parties as possible. Physical records must be picked up and collected, which in itself adds a useless delay, and their very physicality prevents them from being accessed easily. With the help of EHRs, such data is supplied to the patient and doctor as soon as it becomes available, eliminating any delay between the completion of the test results and their access by the user. This also allows the person to look at the information at any time, from any place, without fear of losing it, damaging it, or having to keep it safe in any way. The information is inherently safe, since it is stored digitally and backed up.
Furthermore, linking a cloud platform to the collection of real-time biomedical data gives the doctor a live overview of the patient’s health, minimizing intervention time in an emergency. Since a patient suffering a crisis will often be unable to use a phone and call an ambulance directly, the fact that the doctor can be alerted immediately to such a situation could potentially save lives. Moreover, the system can act to prevent, not only react: the doctor can supervise a patient’s health, notice unhealthy behavior, and notify the patient of what needs to change. This does, of course, raise the whole issue of privacy and surveillance, an issue quite present in today’s world as it is, but at the very least it gives people an option: the option to be looked after and taken care of if their situation deteriorates to the point where they could not act otherwise.
The project as it stands does have limitations, however. The precision of the heartbeat information relies entirely on the smartwatch device, and not all smartwatches have a heartbeat sensor. The mobile phone must also have constant internet access in order to send the information; if the phone loses connectivity, or the smartwatch fails to pick up the heart rate, the information can reach the web server significantly late or altered. Furthermore, if the server platform is loaded with hundreds or thousands of patients, the very real issue of hardware performance appears. There is only so much processing power that can be put inside a server cheaply, and the more transactions and database operations it must handle, the more powerful the machine must be. This can lead to significant investment in the hardware supporting the web platform, with costs that climb steeply as the system scales. Ideally, all this investment is also doubled at the backup site, since keeping records safe in case of disaster is an entire point of having digital records in the first place. Finally, the most dangerous threat to such a platform remains a cyberattack. As attacks have grown more common and more targeted, the need for security has grown significantly in recent years. Anything with access to the internet is vulnerable, and anything holding databases of civilian information represents an especially attractive target for attackers. The entire infrastructure must therefore be secured, which adds an even bigger investment, in both time and money.
While the project describes the basic premise of the idea, the potential for future development is even greater. The platform itself could be extended to support different kinds of doctors and departments, in effect transcribing the actual layout and topology of a hospital onto the website and offering patients intuitive access to exactly the doctors they need. The platform could also be linked to various institutes and clinics, allowing medical referrals and appointments directly from the website. All of this could potentially evolve into an entire digital hospital, allowing communication and sharing of information between the doctors and professionals themselves, all accessible from anywhere at any time, giving the patient the best possible care without delay. Beyond the server side, the infrastructure could also grow to encompass not only heartbeat collection but an entire array of sensors, allowing, if needed, full biomedical monitoring of patients, from blood pressure to glucose levels and more. The website access provided to the patient could then be turned into a personal health dashboard, with information about when to take medication and what to do in case of emergency, offering a full, detailed overview of everything the patient needs. Going even further, by contracting the services of a delivery company, the platform could also serve as an online pharmacy, letting patients order the medication they need and have it delivered directly to their home.
To conclude, in my opinion, such a project would be highly important in the future to ensure the efficient distribution of medical information in the hopes of offering better, faster medical services.