NEURAL QUANTUM CRYPTOLOGY AND AUTHENTICATION DISTRIBUTED IN LIGHT FIDELITY APPLICATIONS
Bachelor thesis
Alexandru PANDI
Supervisor(s):
As. Dr. Ing. [anonimizat]
2017
Abstract
Modern cryptosystems are endangered. We are about to live through a second renaissance in computing technology. Quantum computers are no longer a dream but most certainly a reality; for cyber security enthusiasts, this is the beginning of the end of an era.
The modern cryptosystems as we know them today are being rendered useless bit by bit. In this context, it is of the utmost importance to find a quantum-resistant recipe as quickly as we can; otherwise the Internet of Things – our things – will become estranged to us.
This paper was proposed as a starting point to deepen the analysis into the most vulnerable spots of our cyberworld today – and to implement an experimental quantum-resistant solution.
Introduction
Dear reader,
I am a soon-to-be graduate of the Computer Science Faculty. I am passionate about reading, understanding, questioning, innovating, coding and, not least, writing – and yes, this paper will contain plenty of it.
Of course, you expect (as convention dictates) to be impressed by the first chapter of this paper. And you are right to expect that! It is normal for a child to expect to be impressed by the first words he or she can understand. But those expectations will soon fade.
You see, this work is not about being impressed right away – it did not even impress me right away, and I was immersed in it for almost a year. Cyber security and Machine Learning are two fields that unfold themselves to you, regardless of your original feelings.
But now, on a serious note – you are here because you are interested in the subject, or maybe in just one of the two fields, or maybe you were simply captivated by the title. You decided to step in – and found me, an annoying voice telling you what to expect.
This paper is about finding the most vulnerable spots of today's authentication and encryption algorithms, and seeking the right recipe to put an end to data leaks across the world. Of course, this is an impossible dream; yet it is entirely possible to prepare, because just around the corner stands a new technology (quantum technology), waiting for you to fall asleep believing everything is sorted – so it can read your most intimate secrets.
Follow me through the next section to discover what made me start this project in the first place!
Section 1.1. Motivation
Quantum computers are a reality. For some they represent the future, for others they might be seen as the eighth wonder of the world, but for people like me they are an imminent danger to world security as we know it today.
Intelligent TVs, intelligent smartphones, clever refrigerators that know you better than you know yourself, smart cars that drive you from place to place – soon to be thinking on their own – and social media that reads your mind (for those too lazy to type anymore): this is the future!
Or is it?!
Did you ever wonder what would happen if your fridge stopped working of its own will? Or what consequences your PC starting to upload your internet history to Facebook would have? Or what would happen if, one night, your beloved new intelligent stove decided to let the gas pressurizer loose?
These are the questions we need to ask long before computers start knowing who we are.
A future of glass without security would render our existence vain. This is the purpose of this study: to seek out the most intimate weaknesses in today's security algorithms and to find a way to overcome them.
Section 1.2. Tactical Objectives
Find the most intimate weaknesses inside today’s technology with respect to cyber security.
Find what impact the serialization (mass production) of quantum computers would have upon the current generation of defense mechanisms and encryption systems.
Find the impact of Light Fidelity serialization upon current technology and its potential weaknesses.
Implement a system that is proven to be quantum resistant – from an authentication or encryption point of view.
Planning and Field Research
Section 2.1. Resources Planning
At the basis of each successful research project lies strong planning regarding all the resources involved in the actual research and implementation stages. Because this paper meticulously approaches a bleeding-edge field of computer science and engineering, planning is the most important step made so far.
The main stages of resourcing in our vision, are as follows:
Setting out the objectives;
Setting out the steps / research methodology steps;
Setting out the budget;
Setting out the timeframe and chronology for the following steps – within a working framework (or not);
Setting out the main working methodology;
Laying out the conclusions;
First of all, in the planning stages we must refine our objectives into straightforward ones, dispensed over narrow fields of action. Developing a short methodology which we may recall later, with respect to further detailing, is a must at this point. So, without further ado, here are the general steps we consider crucial:
Setting out the baseline of this thesis – already done
Laying out the main objectives – already done
Setting up the project methodology and chronology – the main feature of this chapter;
Conducting the first stage of the general field research – aimed at finding the main problems with respect to the objectives traced earlier
Laying out the main problems found with respect to the initial goals
Updating the main objectives to the recent findings
Conducting the second stage of the general field research – aimed at finding and debating solutions to the recently updated objectives
Setting up the main research conclusions
Researching upon and establishing a list of technologies to use in the development and implementation stages
Setting up a list of specifications regarding the system architecture – both hardware and software
Laying out the functionalities of the system
Detailed planning with respect to algorithmic, software and engineering stages
Implementation – setting up the pseudo-code, test code and production code in the unit-testing fashion
Testing the product and laying out the usage features and recommendations
Laying out the conclusions and further development directions
Filling in the main results
Making the abstract;
Section 2.2. Budget
With the exception of purely theoretical research papers, any other academic process requires an investment in the development stage. Our paper is no exception to this rule, especially since it is a Research and Development project.
Naturally, in this case, it is of the utmost importance that we schedule our budget, investment timeframe and limitations tightly, so that in the end we finish the process with minimum losses.
First of all, the working methodology for the budget, as debated, is as follows:
Establishing a general starting budget;
Laying out the deemed necessary;
Adjusting the budget accordingly to the previous step;
Reserving a compensation (emergency) budget;
Reserving the printing budget (for thesis publication);
If needed, at any point, a supplementation of the budget is permitted, but only after carefully analyzing the options.
PROPOSED BUDGET FOR THIS PROJECT:
500 EUR
(approx. 2,200 RON, National Bank of Romania equivalent, as of 16 November 2016)
The deemed necessary was established as described in the Table 0.1 (below).
Table 0.1 – Deemed Necessary
ADJUSTED BUDGET FOR THIS PROJECT:
550 EUR
(approx. 2,500 RON, National Bank of Romania equivalent, as of 28 November 2016)
EMERGENCY BUDGET FOR THIS PROJECT:
55 EUR
(approx. 250 RON, National Bank of Romania equivalent, as of 28 November 2016)
Computed as 10% of the Adjusted Budget.
PUBLISHING BUDGET FOR THIS PROJECT:
27.5 EUR
(approx. 125 RON, National Bank of Romania equivalent, as of 28 November 2016)
Computed as 5% of the Adjusted Budget.
GENERAL ADOPTED BUDGET FOR THIS PROJECT:
632.5 EUR ≈ 650 EUR
(approx. 3,000 RON, National Bank of Romania equivalent, as of 28 November 2016)
Computed as the adjusted, emergency and publishing budgets added together.
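To make the arithmetic above explicit, the short Python sketch below reproduces how the adopted figure of 632.5 EUR follows from the adjusted budget; it is purely illustrative.

```python
# Illustrative computation of the adopted budget (figures in EUR, from the text above).
adjusted = 550.0                        # adjusted project budget
emergency = round(0.10 * adjusted, 1)   # 10% reserve -> 55.0
publishing = round(0.05 * adjusted, 1)  # 5% printing reserve -> 27.5
general = adjusted + emergency + publishing
print(general)                          # 632.5, rounded up in the text to roughly 650
```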
Section 2.3. Time Management
One very important aspect of resources planning is time management. Time is the most important resource in researching and further developing a method or product, as its effects are very visible in the final result of each stage and, subsequently, in the outcome of the entire project.
The effects of poor time management may be seen during the process, as even if the timeframe is set and there are deadlines at each step, one may or may not consider the delays that interfere in the process.
Our setup for the time resourcing, according to the objectives of this paper, is outlined as follows:
Choosing a work layout framework;
Setting out the timeframe according to the research outline described in the previous section;
Identifying the main concerns about the delays that may interfere in the process;
Reaching a scheme that includes and treats the concerns traced in the previous stage;
Setting out the final timeframe;
Our work framework of choice is Scrum. Scrum is “a framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value.”.
The process lays its fundamentals upon structured iterations of actions such as: backlog planning (product specifications and functionality planning), sprint planning (action process outline) – previously described, the sprint term (the time interval during which the research, development or implementation stage focuses on one aspect only), the sprint review (assessing the productivity of one sprint term) – described later on, at the end of each stage, and the sprint retrospective (feedback based upon the productivity index of each stage, reflected in the final result).
The main concerns about this process fall under two great categories:
Research:
Finding inconclusive results;
Finding contrary results;
Finding information within unreliable sources – and checking upon it;
Finding a dead lead – and reiterate the whole research thread;
Other faults;
Development:
Discovering faults in the methodology;
Discovering faults in software integration;
Discovering faults in engineering methodology;
Discovering faults in engineering execution;
Unexpected results in testing – both hardware and software;
Other faults;
Finally, to address any potential upcoming crisis, we adopt an extension plan for our end dates. This extension will be granted as 10% of the time (in days) assigned to one specific task, computed as follows (a small helper sketch is given after the list below):
If the task is based upon gathering or implementing procedures, the approximated value will be rounded up to the unit.
If the task is based upon writing procedures, the approximated value will be rounded down to the unit.
No extension should be granted for more than 6 days.
There will be tasks which do not benefit from extension.
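As a worked illustration of the extension rule above, the helper below computes the permitted slip for a task. It is an illustrative sketch only, not part of the thesis toolchain, and the function and parameter names are assumptions.

```python
import math

def extension_days(assigned_days, task_kind, extendable=True):
    # Extension rule sketched above: 10% of the assigned days, rounded up for
    # gathering/implementing tasks, rounded down for writing tasks, capped at
    # 6 days; some tasks do not benefit from any extension at all.
    if not extendable:
        return 0
    raw = 0.10 * assigned_days
    if task_kind in ("gathering", "implementing"):
        days = math.ceil(raw)       # rounded up to the unit
    else:                           # writing procedures
        days = math.floor(raw)      # rounded down to the unit
    return min(days, 6)

# Example: a 25-day implementation task may slip by ceil(2.5) = 3 days.
print(extension_days(25, "implementing"))
```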
In Table 0.2 (below) you can find how the working outline is set:
Table 0.2 – Final project timeframe
This schedule will be promptly applied in every stage described within it. One stage and subsequently its sub-stages may be regulated separately, but with respect to the interval from which they belong.
Section 2.4. Field Research
In this section we further our analysis with respect to the established objectives, traced under Section 1.2 (Tactical Objectives). This stage is of extremely high importance, as based on the findings written here we move forward towards deepening our theoretical reasoning and, furthermore, the development.
The working methodology for this chapter follows an evolutionary concrete-to-abstract line of reasoning.
SUBSECTION 2.4.1. DATA TRANSFER AND COMMUNICATION
In this section we approach the data transmission models used today. The ultimate purpose of this extended reasoning is to identify and classify the communication models, in order to ultimately decide which architecture suits our case best.
Network topology
Network topology defines ”how the systems are physically connected”. From the same journal (Santra et al., 2013) we can classify them based upon their main security issue (see Table 0.1, below).
Table 0.1 – Physical Network Classification
In order to proceed to classification and prioritization we must first trace our objectives regarding the desired topology for this research project. But first things first – which type of general architecture do we want to apply to? Which kind of architecture do we work with?
These questions are extremely relevant because, based on the type of network architecture we choose to work with, we may add an extra layer of security to our project, or remove one (subsequently adding a major vulnerability to our whole system).
Regarding the types of networks we deal with, it is a high priority not to approach just a few types of architectures, thus narrowing our area of effectiveness, but instead to generalize and expand our capabilities – therefore we say that our concept must be suited to work with any kind of physical network architecture.
Moreover, do we want our project to work upon a client-server architecture or peer-to-peer? To answer this question we turn our attention to a synthetic comparison between these two options.
Peer-to-peer architecture.
In the cited author's conception, Peer-to-Peer networks are not only distributed systems within which each node is equal in rank with all the others; they also refer „to a class of systems and applications that employ distributed resources to perform a critical function such as resources localization in a decentralized manner”.
From the same author we take the fact that P2P networks „implement a virtual overlay network over the underlying physical network” (see Figure 0.1 below).
Figure 0.1 – P2P Overlay Network
One major advantage, according to the same author, lies in the fact that P2P networks are suited to be scalable. This means that we must consider this architecture for its efficiency and cost-effectiveness. Moreover, P2P networks are well suited for robustness and performance as well as high availability of files throughout the network – in the case of unstructured P2P (the cited paper, Amad et al., 2012, considered keyword network search as a standard in analysing multiple variations of P2P).
In terms of choices regarding P2P architectures we have two options:
Unstructured P2P networks – which have no restrictions in terms of data placement in the overlay topology;
Structured P2P networks – which, contrary to the above, do impose multiple restrictions upon placement in the overlay topology (subsequently making resources rarer in terms of accessibility) – see Table 0.2 below, in accordance with the cited Swedish scholar's opinion.
By comparing the two mentioned above in accordance with their response time, we get the following Table 0.2.
Table 0.2 – P2P Networks comparison – memory usage, response time
Finally, in terms of security, according to the cited source, P2P networks are susceptible to a wide range of cybernetic attacks (see Table 0.3 below).
Table 0.3 – Susceptible attacks in P2P architecture
Client – Server architecture
Client-server architecture is „a system that performs both the functions of client and server so as to promote the sharing of information between them. It allows many users to have access to the same database at the same time, and the database will store much information”. See Figure 0.2 below.
Figure 0.2 – Client-Server Architecture
Table 0.3 – Types of C-S Architectures
In Table 0.3 (above) we describe the types of client-server architectures with respect to their main components. It should be retained that having more functional components increases the security risk upon the network.
This aspect holds as a condition for every single system connected to the network. In order to further our understanding of the risks, we picture a 3-tier architecture below – Figure 0.3.
Figure 0.3 – 3-Tier C-S Architecture
As we can see above, in terms of security reasoning, the architecture described presents a number of disadvantages, outlined in Table 0.4 (below).
Table 0.4 – C-S. Security Disadvantages
COMMUNICATION MEDIA
Nowadays, transmitting data over large distances with extreme accuracy is no longer an experimental effort viewed as a development investment; it has become a necessity.
Transcontinental data cables are the bloodlines of our modern computer society. Without them, no sentence would make it through the cold waters of the Atlantic or the Pacific, for instance. Speaking of cables, they are the oldest form of data transmission media.
The shift from this static perspective of data transmission towards conquering the air was a major breakthrough, beginning in 1894 when Guglielmo Marconi first built upon the principles of radio waves demonstrated by Heinrich Hertz in 1888.
Moreover, the late 20th century brought humanity the chance to explore the deep black and unknown skies above the clouds, giving us the chance to take radio communication technology further and put it on our satellites.
Although these already known and much used technologies granted us the ability to communicate from the tiniest distance (by electromagnetic induction, Near Field Communication), to the greatest of them (space missions on Moon, Mars etc.), they pose two main problems:
The quality of the signal over distance – a factor that’s in direct connection to the type of communication media used (coaxial cable, fiber optics cable, radio waves);
The security of the communication channel regarding direct access, passive attacks, collisions, encryption-decryption flaws etc.
To fully understand the complexity of the security issues within the communications media it is necessary to approach the most relevant types of communication ports and study their flaws.
Table 0.5 – Outdated or Internal Ports
In Table 0.5 (above) we present a short outline of our theoretical study upon outdated or internal ports regarding their security issues. Our findings point to 3 main directions:
Direct access – passive attack vulnerability;
Speed issues;
Configuration issues;
As stated above, the ports described in this stage are either internal ports (which are harder to access due to physical restrictions) or outdated ports (rarely used today).
Having traced an outline of security issues upon old communication channels, we head towards analyzing the top 4 modern ones. This stage is meant to help us understand the security weaknesses of modern communication mediums with respect to the issues stated above.
Table 0.6 – Modern transmission mediums
The results of this research stage (summarized in Table 0.6) confirm that even though modern transmission mediums are developed with security in mind, and encryption standards have evolved towards high-performance, attack-resistant algorithms, the main issues present in early and developing technology are still hanging around.
DATA TRANSMISSION PROTOCOLS
After analyzing the network topology and communication mediums, it is of utmost importance to understand how data travels from one host to another. On this note we move on to analyze the OSI and TCP/IP models to understand where security breaches are most likely to occur.
The OSI Model was introduced by ISO (International Organization for Standardization) in 1984. The model summarizes sophisticated network phenomena and cases across seven layers. The internal functions of a communication system are characterized and standardized by partitioning them into abstraction layers; the model groups communication functions into seven logical layers.
In the cited author's conception, the OSI Model works upon 4 main principles:
A layer should be created where a different abstraction is needed.
The function of each layer should define internationally standardized protocols.
The layer boundaries should minimize the information flow across the interfaces.
The number of layers should be large enough that distinct functions need not be thrown together in the same layer out of necessity, and small enough that the architecture does not become unwieldy.
The working OSI Model implements 7 layers described in Figure 0.14 (below).
Figure 0.14 – OSI Model Architecture
From a security point of view, the OSI Model is developed with encryption and decryption capabilities, which makes it suitable for secure communication as well as a reliable architecture, as stated in the cited paper. But remember, the OSI Model is just a theoretical model upon which the TCP/IP and UDP/IP models are based and further developed.
According to the cited source, TCP/IP is “the computer networking model and set of communications protocols used on the Internet and similar computer networks”.
The TCP/IP Model implements the OSI architecture in a simplified form, so that in the end this new model has 4 layers instead of 7. Also, this new model groups a set of subprotocols under the TCP and UDP categories (like Telnet, FTP, SMTP) as well as server functionalities and specific protocols under the IGMP and ICMP categories (like DNS, RIP and SNMP), depending on the physical interface in question.
In the Figure 0.15 (below) the whole TCP/IP architecture is described.
Figure 0.15 – TCP/IP Model Scheme and Layers
Because TCP/IP was constructed to use packets, it is considered a packet-switching technology, the primary benefit of which is that data can be routed to a destination through any number of transmission points, making the network decentralized and less vulnerable to equipment failure.
As a comparison, networks that use circuit-switching technology like the telephone network must set up a dedicated connection between two points, which has a larger resource footprint and is easier to disrupt.
Every packet transmitted over a packet-switching network such as the Internet (the largest such network in existence) is constructed of two major pieces: the packet header and the data. Within the header are several distinct pieces of information about the packet itself.
This information includes the version of the protocol being used (IPv4 – see Figure 0.16, pp. 27 or IPv6 – see Figure 0.17, pp.28), the length of the packet, the number of packets used to send the total data in question, the source and destination addresses, a checksum (used in error correction calculations), and the Time To Live (TTL) data, which defines how many devices the packet may be transmitted along, or hops, before the message is allowed to time out.
The data itself is divided into segments of varying length, generally in a range of 0 to 64 kB. Packets are transmitted over Ethernet networks, the most common physical type, within frames – pre-set data blocks that have their own header and trailer information.
Since packets are the basic unit of network transmission, they fit into the standard networking model at the network layer, where network devices transmit packets and configure packet routing, i.e. the paths a given packet will take to reach its destination.
After being formed at the network layer, packets are encoded into bits, then passed down to the data link layer. From there, the packets are inserted into frames, and then passed to the physical layer, which is the actual medium of transmission.
The process is reversed at the destination, with signal passing to the data link layer, pulling the data as bits from the frame, decoding into packets and passing the packets to the network layer for transmission.
In Figure 0.18 (below) you can see the standard TCP/IP packet header.
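To make the header fields listed above concrete, here is a minimal Python sketch that unpacks the fixed 20-byte IPv4 header (RFC 791 layout) from a raw packet; the field names are illustrative and error handling is omitted.

```python
import struct
import socket

def parse_ipv4_header(packet: bytes) -> dict:
    # Unpack the fixed 20-byte IPv4 header (options, if present, follow these fields).
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": ver_ihl >> 4,              # protocol version (4 for IPv4)
        "header_len": (ver_ihl & 0x0F) * 4,   # header length in bytes
        "total_len": total_len,               # header + data length
        "ttl": ttl,                           # Time To Live (remaining hops)
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "checksum": checksum,                 # header error-detection value
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }
```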
The last aspect of the TCP/IP protocol from a security point of view is related to the risks that this system carries. Yes, being based upon the OSI Model, the TCP/IP protocol is highly specialized in data protection, due to its architecture and stages of operation. Nevertheless, it still has some flaws.
Our research showed that the main weaknesses are as follows:
ROUTING (Figure 0.19)
With TCP/IP there is no sure way to know which route a packet sent from source A to destination B takes. As an unfortunate effect, this opens a whole new world of attacks regarding the intermediate points through which any packet travels.
ADDRESS SPOOFING (IP CLONING) (Figure 0.20)
Each IP packet includes both the IP address it is sent to and the address it originates from. However, by default there is no verification that the source address is really the address of the host which created the message.
This allows any host to forge an IP packet with the source IP of any other host and claim it originates from that host. This problem is much more serious for UDP/IP than for TCP/IP, because TCP/IP requires a SYN/SYN-ACK/ACK handshake to establish a connection, which only works when the source IP of the packet which makes the connection is correct.
ARP SPOOFING (Figure 0.21)
This issue is related to the Address Resolution Protocol, which binds IP addresses to network interfaces. It affects the security of IP communication because it allows one host to "steal" the IP address of another host so that any future IP packets get redirected. ARP spoofing usually only works within the same network segment. Also, an ARP poisoning attack can be detected, and enterprise-grade network equipment is usually able to prevent it.
SYN-FLOODING (Figure 0.22)
This is a denial-of-service attack where the attacking host sends lots of SYN-packets (requests to open a connection) to the target host. However, it spoofs the source-IP with random addresses, so the server sends an ACK-packet (acceptance of the connection) to a host which never asked for it.
It will then wait for the SYN-ACK packet (acknowledgment of acceptance by the initiator) until timeout. This can bind a large amount of resources on the host and prevent it from accepting legitimate connections.
While this does not result in any data exposure or data manipulation, it is still a frequently used method to temporarily prevent users from reaching a certain host.
SEQUENCE NUMBER GUESSING
When two hosts have established a connection, each packet they exchange is numbered. When an attacker knows that two hosts are communicating with each other and they can guess the next sequence-number, they can spoof such a packet to inject forged data into the communication.
INTERNET OF THINGS
According to the cited source, the Internet of Things covers a multitude of “scenarios where network connectivity and computing capability extends to objects, sensors and everyday items not normally considered computers, allowing these devices to generate, exchange and consume data with minimal human intervention”.
Figure 0.24 – IoT
In the views of the same author, IoT raises a series of problems such as:
Many Internet of Things devices, such as sensors and consumer items, are designed to be deployed at a massive scale that is orders of magnitude beyond that of traditional Internet-connected devices.
As a result, the potential quantity of interconnected links between these devices is unprecedented. Further, many of these devices will be able to establish links and communicate with other devices on their own in an unpredictable and dynamic fashion. Therefore, existing tools, methods, and strategies associated with IoT security may need new consideration.
Many IoT deployments will consist of collections of identical or near identical devices. This homogeneity magnifies the potential impact of any single security vulnerability by the sheer number of devices that all have the same characteristics.
For example, a communication protocol vulnerability of one company’s brand of Internet-enabled light bulbs might extend to every make and model of device that uses that same protocol or which shares key design or manufacturing characteristics.
Many Internet of Things devices will be deployed with an anticipated service life many years longer than is typically associated with high-tech equipment. Further, these devices might be deployed in circumstances that make it difficult or impossible to reconfigure or upgrade them; or these devices might outlive the company that created them, leaving orphaned devices with no means of long-term support.
These scenarios illustrate that security mechanisms that are adequate at deployment might not be adequate for the full lifespan of the device as security threats evolve. As such, this may create vulnerabilities that could persist for a long time.
This is in contrast to the paradigm of traditional computer systems that are normally upgraded with operating system software updates throughout the life of the computer to address security threats. The long-term support and management of IoT devices is a significant security challenge.
Many IoT devices are intentionally designed without any ability to be upgraded, or the upgrade process is cumbersome or impractical. For example, consider the 2015 Fiat Chrysler recall of 1.4 million vehicles to fix a vulnerability that allowed an attacker to wirelessly hack into the vehicle.
These cars must be taken to a Fiat Chrysler dealer for a manual upgrade, or the owner must perform the upgrade themselves with a USB key. The reality is that a high percentage of these autos probably will not be upgraded because the upgrade process presents an inconvenience for owners, leaving them perpetually vulnerable to cybersecurity threats, especially when the automobile appears to be performing well otherwise.
Many IoT devices operate in a manner where the user has little or no real visibility into the internal workings of the device or the precise data streams they produce. This creates a security vulnerability when a user believes an IoT device is performing certain functions, when in reality it might be performing unwanted functions or collecting more data than the user intends.
The device’s functions also could change without notice when the manufacturer provides an update, leaving the user vulnerable to whatever changes the manufacturer makes.
Some IoT devices are likely to be deployed in places where physical security is difficult or impossible to achieve. Attackers may have direct physical access to IoT devices. Anti-tamper features and other design innovations will need to be considered to ensure security.
Some IoT devices, like many environmental sensors, are designed to be unobtrusively embedded in the environment, where a user does not actively notice the device nor monitor its operating status.
Additionally, devices may have no clear way to alert the user when a security problem arises, making it difficult for a user to know that a security breach of an IoT device has occurred.
A security breach might persist for a long time before being noticed and corrected if correction or mitigation is even possible or practical. Similarly, the user might not be aware that a sensor exists in her surroundings, potentially allowing a security breach to persist for long periods without detection.
Early models of the Internet of Things assume IoT will be the product of large private and/or public technology enterprises, but in the future “Build Your own Internet of Things” (BYIoT) might become more commonplace, as exemplified by the growing Arduino and Raspberry Pi developer communities. These may or may not apply industry best-practice security standards.
CYBER SECURITY
In this section we will deal with facts regarding the integrity of data and network systems. The methodology for this section consists of:
Gathering cyber security data from studies and research campaigns run between 2013 and 2017;
Analyzing the main authentication methods and algorithms;
Analyzing the main cryptology methods and algorithms;
Closing with a final list of problems and potential solutions;
Without further ado, we jump right into the facts regarding cyber security.
A study run in 2015 by UBM Tech gathered data from industry regarding the top presumed threats according to company CEOs, managers and security teams. The results are shown in Figure 0.25 below.
Figure 0.25 – UBM Tech 2015 – potential threats survey
A research campaign launched by Symantec in 2014 rendered the following results with this opening statement: „In 2013 much attention was focused on cyber-espionage, threats to privacy and the acts of malicious insiders. However the end of 2013 provided a painful reminder that cybercrime remains prevalent and that damaging threats from cybercriminals continue to loom over businesses and consumers. Eight breaches in 2013 each exposed greater than 10 million identities, targeted attacks increased and end-user attitudes towards social media and mobile devices resulted in wild scams and laid a foundation for major problems for endusers and businesses as these devices come to dominate our lives”.
A further research campaign launched by Symantec in 2016 rendered results in the same vein.
SECURE COMMUNICATIONS
In order to understand how data can be transmitted securely, we first need to understand some basic principles. According to information security theory there are 3 main steps in securing a communication channel:
Authentication:
Authentication is used by a server when the server needs to know exactly who is accessing their information or site;
Authentication is used by a client when the client needs to know that the server is the system it claims to be;
In authentication, the user or computer has to prove its identity to the server or client;
Usually, authentication by a server entails the use of a user name and password. Other ways to authenticate can be through cards, retina scans, voice recognition, and fingerprints;
Authentication by a client usually involves the server giving a certificate to the client, in which a trusted third party such as Verisign or Thawte states that the server belongs to the entity (such as a bank) that the client expects it to belong to;
Authentication does not determine what tasks the individual can do or what files the individual can see. Authentication merely identifies and verifies who the person or system is;
Authorization:
Authorization is a process by which a server determines if the client has permission to use a resource or access a file;
Authorization is usually coupled with authentication so that the server has some concept of who the client is that is requesting access;
The type of authentication required for authorization may vary; passwords may be required in some cases but not in others;
In some cases, there is no authorization; any user may use a resource or access a file simply by asking for it. Most of the web pages on the Internet require no authentication or authorization;
Encryption:
Encryption involves the process of transforming data so that it is unreadable by anyone who does not have a decryption key;
The Secure Shell (SSH) and Secure Sockets Layer (SSL) protocols are usually used in encryption processes. SSL drives the secure part of “https://” sites used in e-commerce (like eBay and Amazon.com);
All data in SSL transactions is encrypted between the client (browser) and the server (web server) before the data is transferred between the two;
All data in SSH sessions is encrypted between the client and the server when communicating at the shell;
By encrypting the data exchanged between the client and server, information like social security numbers, credit card numbers, and home addresses can be sent over the Internet with less risk of being intercepted during transit (a minimal TLS client sketch follows this list);
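As a hands-on illustration of the encryption step, the following Python sketch uses only the standard library to open a TLS-protected connection: the server is authenticated during the handshake and everything sent afterwards is encrypted in transit. The host example.com is a placeholder and outbound network access is assumed.

```python
import socket
import ssl

context = ssl.create_default_context()          # verifies the server certificate chain
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # Server authentication happened during the TLS handshake; from here on,
        # all application data is encrypted between client and server.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(1024))
```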
AUTHENTICATION TYPES
Since CISCO approaches wireless authentication in approximately the same manner as wired connection authentication, we decided to follow this route. Below are the most relevant (in our view) authentication methods proposed by CISCO. This section gives only a brief introduction to what these algorithms are and how they work; further discussion and the outcome of this research stage are deferred to the conclusions section.
Open Authentication
CISCO states that open authentication allows any device to authenticate and then attempt to communicate with the access point. Using open authentication, any wireless device can authenticate with the access point, but the device can communicate only if its Wired Equivalent Privacy (WEP) keys match the access point’s WEP keys. Devices that are not using WEP do not attempt to authenticate with an access point that is using WEP. Open authentication does not rely on a RADIUS server on your network. This principle can be seen below in Figure 0.37.
Figure 0.37 – CISCO Open Authentication
Shared Key Authentication
During shared key authentication, the access point sends an unencrypted challenge text string to any device that is attempting to communicate with the access point. The device that is requesting authentication encrypts the challenge text and sends it back to the access point. If the challenge text is encrypted correctly, the access point allows the requesting device to authenticate – as proven by CISCO – Figure 0.38.
Figure 0.38 – CISCO Shared Authentication
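To make the challenge-response idea above concrete, here is a simplified Python sketch. Real WEP shared-key authentication encrypts the challenge with RC4 under the WEP key; in this illustrative stand-in an HMAC over the challenge plays that role.

```python
import hmac
import hashlib
import os

shared_key = b"pre-shared-secret"                 # known to both sides in advance

# Access point side: emit an unencrypted random challenge.
challenge = os.urandom(16)

# Client side: prove knowledge of the key by transforming the challenge.
response = hmac.new(shared_key, challenge, hashlib.sha256).digest()

# Access point side: recompute the expected answer and compare in constant time.
expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
print("authenticated" if hmac.compare_digest(response, expected) else "rejected")
```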
Extensible Authentication Protocol
CISCO sets EAP authentication as the type that provides the highest level of security for your wireless network. By using the Extensible Authentication Protocol (EAP) to interact with an EAP-compatible RADIUS server, the access point helps a wireless client device and the RADIUS server to perform mutual authentication and derive a dynamic unicast WEP key. The RADIUS server sends the WEP key to the access point, which uses the key for all unicast data signals that the server sends to or receives from the client. The access point also encrypts its broadcast WEP key (which is entered in the access point’s WEP key slot 1) with the client’s unicast key and sends it to the client – Figure 0.39.
Figure 0.39 – CISCO EAP Authentication
MAC Authentication
The access point relays the wireless client device’s MAC address to a RADIUS server on your network, and the server checks the address against a list of allowed MAC addresses. Because intruders can create counterfeit MAC addresses, MAC-based authentication is less secure than EAP authentication. However, MAC-based authentication provides an alternate authentication method for client devices that do not have EAP capability – in CISCO’s concept – Figure 0.40.
Figure 0.40 – CISCO MAC Authentication
OTHER TYPES OF AUTHENTICATION
According to the cited authors, “Cryptography is one of the most important fields in computer security. It is a method of transferring private information and data through open network communication, so only the receiver who has the secret key can read the encrypted messages which might be documents, phone conversations, images or other form of data”. In the authors' conception, cryptography can be used to ensure much more secure authentication.
They propose five general methods of authentication, described in Table 0.7 (below).
Table 0.7 – Other types of Authentication
CRYPTOLOGY ALGORITHMS
After a peer has been authenticated and is considered trustworthy, one problem remains regarding the secrecy of the messages (no matter their nature) sent between that peer and another.
Ancient cryptology used common logic to hide the real message under a ciphertext (usually consisting of the same vocabulary as the original message). As time went by, alongside encryption-decryption methods there developed alternative methods to break the code, using one of two approaches: brute force and reverse engineering.
Once the computer was invented, one of the fundamental tasks it was given was to automate the cryptographic task. It was then that old encryption techniques became outdated and humanity began a new journey to seek alternative, more advanced ways to encrypt its secrets.
Moreover, in comparison with the old methods, computer-driven algorithms had a major advantage – the possibility of millions of iterations each second. As a consequence, in the early days the algorithms differentiated themselves by brute-force-computing the ciphertext faster than any human was capable of.
However, as time flew by once again, early computer algorithms became obsolete, due to two factors:
The appearance of new, alternative ways to break the code – most often these new algorithms were better than the original ones, rendering them useless;
Over-usage – in early modern cryptology the algorithms, based on logic and mathematics, were reusable only to some extent. Once that extent was exceeded, the algorithms were rendered useless by the very mathematics on which they were based;
So, we look forward to analyzing the cryptology algorithms that are most heavily used today, as we consider this approach the most productive of all – since our scope is to refer to their weaknesses in terms of providing the best security in secrecy, in a continuously evolving technological context.
After long hours of research we came up with the following 4 algorithms:
Data Encryption Standard (Figure 0.46)/ Triple DES;
Advanced Encryption Standard (Figure 0.47);
Rivest-Shamir-Adleman Encryption Protocol (Figure 0.48);
Hash Key;
In the cited author's conception, the first 3 algorithms are used to encode and decode messages, while the last one is used only to ensure the integrity of the transmitted message.
To synthesize the first 3 algorithms, a comparison can be found below in Table 0.8, pp. 44.
Table 0.8 – DES-AES-RSA Comparison
According to the cited source, there is an evolutionary point of view to be taken into consideration when comparing SHA-1, SHA-2 and SHA-3 (with all their flavors). SHA-1 is outdated due to increasingly practical code collisions, SHA-2 is now on the verge (with collisions beginning to be registered against it as well), and SHA-3 will be the next standard.
Our research upon this comparison led to the following outcome, described in Table 0.9 below; a short standard-library illustration follows the table.
Table 0.9 – SHA-2 vs SHA-3 Comparison
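For a hands-on feel of the two hash families compared above, the short Python snippet below computes SHA-2 (SHA-256) and SHA-3 (SHA3-256) digests of the same message using the standard hashlib module (available from Python 3.6 onward); it is illustrative only.

```python
import hashlib

message = b"quantum-resistant or not, hash it first"

# SHA-2 and SHA-3 digests of the same message, both from the standard library.
print("SHA-256 :", hashlib.sha256(message).hexdigest())
print("SHA3-256:", hashlib.sha3_256(message).hexdigest())

# Integrity check: the slightest change in the message changes the digest completely.
print(hashlib.sha3_256(message + b"!").hexdigest())
```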
The cited author performed a series of tests regarding the encryption speed of the algorithms described above. These can be seen in Figures 0.49, 0.50 and 0.51 below. The tests considered input buffers of different sizes, from 1 KB to 256 MB.
TYPES OF ATTACKS
By no means is this the shortest research stage. From information security theory we know that attacks are grouped into 3 main categories:
Passive attacks – attacks in which the attacker seeks to listen to the information and decipher it. The scope of this attack is to retrieve information from the ciphertext;
Active attacks – attacks in which the attacker seeks either to alter the information sent, to replace the information, or to disturb the communication process (either by interrupting it or by ceasing it completely);
QUANTUM COMPUTERS AND THEIR IMPACT ON CYBERSECURITY
Having deepened our analysis this far into the cyber security area, and still having in mind the opening statement of the previous research stage, it is only natural to consider the fact that quantum computers will soon change the international cryptology community once again (as the first generations of computers did).
But first of all, what exactly is a quantum computer and how can it change the future of cryptology? Well, according to the cited source, “A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. This compares to a normal computer that can only be in one of these 2^n states at any one time”.
In accordance with the same author, a qubit (Figure 0.53) is a state (of a single particle) that can be 0, 1 or anything in between (Figure 0.52 – according to Schrödinger's superposition principle). The mechanism behind a qubit implements a principle of quantum mechanics called superposition. This principle states that any given particle, under the right conditions (usually thermal conditions close to absolute zero, 0 Kelvin), can take any orientation between 0 and 1 (interpreted in degrees, radians, nm etc.).
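A small numerical sketch (using NumPy, purely illustrative) shows a single qubit as a normalized pair of amplitudes and why n qubits require 2^n amplitudes to describe, which is exactly the exponential state space mentioned in the quotation above.

```python
import numpy as np

# A single qubit: a normalized pair of complex amplitudes over |0> and |1>.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)     # equal superposition
assert np.isclose(np.sum(np.abs(qubit) ** 2), 1.0)       # probabilities sum to 1

# n qubits need 2**n amplitudes, which is why classical simulation explodes quickly.
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                            # start in |000>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
full_h = hadamard
for _ in range(n - 1):                                    # H applied to every qubit
    full_h = np.kron(full_h, hadamard)
state = full_h @ state                                    # uniform superposition of 8 states
print(np.round(np.abs(state) ** 2, 3))                    # each state has probability 0.125
```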
Knowing that a quantum computer can be built upon basically any suitable physical material, it is easy to see that there are multiple implementations of Quantum Turing Machines out there. The cited author summarized these implementations as described in Table 0.10 below.
Table 0.10 – Types of Quantum Computers
With that many possibilities, a performant quantum computer no longer exists only in theory, but is a reality nowadays. Moreover, D-Wave produced the first commercially available quantum computer (although the price is too high for it to be considered widely available yet) – and partnered with Google to harness this novel technology's power.
The final question after understanding how big of a deal quantum computers are in terms of computational power is – Will they affect the cryptology community and science?
Most likely – YES! There are multiple research projects under way regarding current cryptographic algorithms and their interaction with quantum attacks – and the outcome is not encouraging at all, since most of the algorithms tested so far have failed the quantum-resistance standard.
RESEARCH CONCLUSIONS
Finally, we end our research stage and turn our attention to outlining the main issues within the cyber security spectrum.
Regarding network architecture, we may conclude that no matter which architecture a household, an institution or a government chooses, there will be attack vulnerabilities that exploit the network's scalability (the most secure network is a peer-to-peer network with two nodes; the most insecure is a client-server network with multiple subnets) as well as the communication medium and process.
In terms of communication medium, we decided to focus upon LAN and wireless, due to the fact that nowadays they are the most used mediums for data transmission. They are also the most vulnerable ones.
Regarding the transmission protocol, we conclude that, no matter how mature and improved they may be, the TCP/IP and UDP/IP protocols are not secure enough (although their layers act as security layers, in reality they are most vulnerable due to weaknesses in the data flow through them).
Regarding authentication protocols, we concluded that they are not secure enough, as they depend on a limited number of factors (usually logical and static factors such as the IP or MAC address) and implement too few variables in granting authorization to a peer.
Regarding the cryptology protocols, we conclude on the same note as with the authentication protocols described above.
Finally, putting all these conclusions in the light of a continuously evolving technology, especially with respect to quantum computing, it is easy to conclude that, once serialized (mass-produced), quantum computers will have a major impact on today's widely used algorithms. Furthermore, as in the early stages of the IT era, most of them will be rendered useless.
Unless further research and development is conducted towards obtaining at least one valid prototype, the future is not only insecure for us; with the Internet of Things evolving at such a quick pace, there will be no place left in the modern civilized world to call safe.
ADAPTED PAPER OBJECTIVES
To further research the technologies needed for a valid quantum-resistant recipe.
This recipe may include hardware as well as software solutions.
To plan the specifications of this ensemble of technologies.
To plan the functionalities of this ensemble of technologies.
If necessary – to plan and execute the engineering side of the ensemble.
If necessary – to plan and execute the software side of the ensemble.
To test the ensemble for quantum resistance (in theory).
To test the prototype for current attacks.
To test the prototype for integration with current technologies.
To run experiments upon the prototype and note the main results.
To trace further research and development directions involving the prototype.
Technologies
After setting out the conclusions in the last chapter, I now start fresh with a new challenge – that of finding the right recipe for my project. This step is crucial for the development part of the paper (we have already established prototyping a homogeneous security system as a goal).
It is quite obvious after the research stage that, in order to succeed in building a quantum-resistant system, we need to fight the potential quantum-computing attacker with its own weapons. So, it is equally obvious that we need to use quantum technology and quantum mechanics in order to stand a chance in this conquest.
Now, the next thing to ask is this: which particle do we choose as our quantum basis, since almost every particle in the universe can be manipulated to obtain the desired quantum effects? How am I going to choose a particle? Is it going to be a molecule, an atom, an electron, a boson, a quark, a wave?
The first thing that pops to mind is cost. So, without further ado, these are the criteria I imposed to gradually narrow the field of choice:
How much does it cost to obtain?
How much does it cost to keep it stable?
How much does it cost to induce quantum mechanisms upon it?
How much does it cost to maintain the system up and running?
How much does it cost to replace the particle basis when the current batch reaches its due date?
In order to help myself find a basis to start with, I recalled the cited research paper regarding 3 fundamental components:
Atoms;
Electromagnetic waves;
Ions;
ATOMS (Table 0.1 below)
Table 0.1 – Costs of Atom-Quantum-Encryption
ELECTROMAGNETIC WAVES / LIGHT (Table 0.2)
Table 0.2 – Costs of Light-Quantum-Encryption
IONS (Table 0.3)
Table 0.3 – Costs of Ion-Quantum-Encryption
As a conclusion, the most relevant particle to use in my project is light.
Section 3.1. Objectives
Based upon the conclusion above, the development objectives are as follows:
Design a hardware prototype that implements the quantum properties of light to encrypt/decrypt or authenticate a message/peer.
Design a software architecture which controls the hardware prototype.
Design a software architecture which harnesses the power of quantum light encryption to encrypt and decrypt messages.
Find a way to transmit these messages so as to avoid exposure or loss of efficiency, without cables.
Test and integrate the modules together and with widely spread technologies in order to ensure continuity and cross platform efficiency and adaptability.
Now, based upon the targets set above, here are the objectives in finding the right technologies to implement with:
Cheap, scalable and reliable embedded systems.
Cheap, scalable and reliable quantum light manipulation devices.
Cheap and reliable light sources.
Efficient, versatile and highly used programming language for both server and client side.
Efficient, accurate and easy to use frameworks to implement the modules.
Section 3.2. Electromagnetic Data Transmission
VISIBLE LIGHT COMMUNICATIONS
Furthermore, we analyze five definitions of Light Fidelity that we consider most relevant to this research paper.
”Li-Fi is transmission of data through illumination by taking the fiber out of fiber optics by sending data through an LED light bulb […] that varies in intensity faster than the human eye can follow” .
”Light Fidelity is a light-based Wi-Fi, which uses light waves instead of radio waves for data transmission” .
“Lifi uses visible light instead of Gigahertz radio waves for data transfer which makes it fast and cheap mode of wireless communication. The idea of Li-Fi was introduced by a German physicist, Harald Hass, which he also referred to as ―data through illumination” .
“In principle, LiFi also relies on electromagnetic radiation for information transmission”.
„Li-Fi is an emerging high-speed, low-cost solution to the scarcity of the radio frequency (RF) spectrum, therefore it is expected to be realized using the widely deployed off-the-shelf optoelectronic LEDs. Due to the mass production of these inexpensive devices, they lack accurate characterizations. In Li-Fi, light is modulated on the subtle changes of the light intensity, therefore, the communication link would be affected by the non-linearity of the voltage-luminance characteristic”.
In Figure 0.1 we can observe the general principle described by the definitions given above. However, this technology is not free of problems, which the cited source describes as follows:
High installation charges of Visible Light Communication (VLC) devices.
Interference from external light sources like sun-light, normal bulbs, opaque materials.
Light cannot penetrate through objects such as walls and the exact explanation is described below:
If there is no obstacle between LED lamp and receiver, the data is received normally on the receiver end.
But if there is some kind of obstacle, like a wall, between the LED lamp and the receiver, then there is a loss of data, which is illustrated in Figure 0.2 below.
Figure 0.2 – LiFi Obstacle Diagram
In this context, the problem raised above actually acts in our favor regarding the secrecy of messages transmitted over LiFi. However, this issue is under active development and will soon be overcome, meaning that LiFi encryption will become relevant.
Light Fidelity is a concept that, in this project, will be implemented as a communication medium as well as an encryption medium. The basic principle generates the same effects in both situations, thus making it suitable for quantum manipulation.
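As a toy illustration of the intensity-modulation principle behind the definitions above, the Python sketch below maps bits onto LED intensity levels (on-off keying) and recovers them at the receiver; the parameter names and levels are illustrative only, not the prototype's actual modulation scheme.

```python
import numpy as np

def on_off_keying(bits, samples_per_bit=8, high=1.0, low=0.1):
    # Each bit becomes a burst of LED intensity samples: 'high' for 1, a dim 'low'
    # for 0, so the lamp never visibly switches off.
    levels = np.where(np.array(bits) == 1, high, low)
    return np.repeat(levels, samples_per_bit)

def demodulate(signal, samples_per_bit=8, threshold=0.5):
    # Receiver side: average each bit interval and compare against a threshold.
    frames = signal.reshape(-1, samples_per_bit)
    return (frames.mean(axis=1) > threshold).astype(int).tolist()

bits = [1, 0, 1, 1, 0]
print(demodulate(on_off_keying(bits)))   # -> [1, 0, 1, 1, 0]
```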
QUANTUM KEY DISTRIBUTION
The principle of modulating light to store data is neither new nor unique. The modulation can take place in the light's intensity or in its polarization. We saw earlier the applications of intensity modulation (LiFi).
But what happens when we modulate the light polarization (orientation in space)? Well, depending on the light source (which in this case must be a non-conventional one, e.g. laser diode), and the modulator unit (fixed e.g. crystal, or mobile e.g. polarization unit) we recall two main protocols used in data encryption.
According to the cited source, “Quantum Cryptography (QC) provides unconditional security relying on the quantum physics law. Such a security is called information theoretic security because it is proved by Shannon’s theory of information”.
Modern quantum cryptography knows two major protocols:
BB84 Protocol
E91 Protocol
BB84 Protocol
„BB84 was the first studied and practical implemented QKD physical layer protocol. It was elaborated by Charles Bennet and Gilles Brassard in 1984 in their article. It is surely the most famous and most realized quantum cryptography protocol. This scheme uses the transmission of single polarized photons (as the quantum states). The polarizations of the photons are four, and are grouped together in two different non orthogonal basis” .
The functioning scheme can be viewed below at Figure 0.3.
Figure 0.3 – BB84 Protocol
The polarization and validation table for BB84 protocol can be viewed below in Table 0.4.
Table 0.4 – BB84 Polarization and Validation
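To illustrate the sifting step of BB84 described above, here is a toy Python simulation with no eavesdropper and no transmission losses; it is purely illustrative and keeps only the positions where the two randomly chosen bases happen to coincide.

```python
import secrets

n = 16
a_bits  = [secrets.randbelow(2) for _ in range(n)]    # Client A's random raw bits
a_bases = [secrets.choice("+x") for _ in range(n)]    # '+' rectilinear, 'x' diagonal
b_bases = [secrets.choice("+x") for _ in range(n)]    # Client B's measurement bases

# With matching bases Client B reads the bit correctly; otherwise the result is random.
b_bits = [a if ab == bb else secrets.randbelow(2)
          for a, ab, bb in zip(a_bits, a_bases, b_bases)]

# Public discussion: compare the bases only (never the bits) and sift the raw key.
sifted_key = [b for b, ab, bb in zip(b_bits, a_bases, b_bases) if ab == bb]
print(sifted_key)      # on average about half of the photons survive sifting
```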
E91 Protocol
In the cited author's conception, the Ekert scheme uses entangled pairs of photons. These can be created by Client A, by Client B, or by some source separate from both of them, including the eavesdropper Eve. The photons are distributed so that Client A and Client B each end up with one photon from each pair.
The scheme relies on two properties of entanglement. First, the entangled states are perfectly correlated, in the sense that if Client A and Client B both measure whether their particles have vertical or horizontal polarizations, they will always get the same answer with 100% probability.
The same is true if they both measure any other pair of complementary (orthogonal) polarizations. However, the particular results are completely random: it is impossible for Client A to predict whether she and Client B will get vertical or horizontal polarization. Second, any attempt at eavesdropping by Eve will destroy these correlations in a way that Client A and Client B can detect.
A typical physical set-up is shown in Figure 0.4, using active polarization rotators (PR), polarizing beam-splitters (PBS) and avalanche photodiodes (APD).
Figure 0.4 – E91 Protocol Scheme
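As a rough intuition for the correlation test described above, here is a deliberately simplified classical toy in Python (an assumption-laden sketch: it ignores the Bell-inequality test that the real E91 protocol relies on and models an intercept-resend eavesdropper only as a loss of correlation):
import random # standard library only
def e91_correlation(n_pairs=200, eve_present=False):
    # same-basis measurements on an entangled pair give identical results,
    # unless an eavesdropper has broken the entanglement
    bases = ['V/H', '+45/-45'] # two complementary measurement bases
    agreed, compared = 0, 0
    for _ in range(n_pairs):
        basis_a, basis_b = random.choice(bases), random.choice(bases)
        shared = random.randint(0, 1) # outcome carried by the entangled pair
        result_a = shared
        result_b = shared if not eve_present else random.randint(0, 1)
        if basis_a == basis_b: # only same-basis rounds are comparable
            compared += 1
            agreed += (result_a == result_b)
    return agreed / compared if compared else 0.0
print(e91_correlation()) # ~1.0: perfect correlation without Eve
print(e91_correlation(eve_present=True)) # well below 1.0: correlations destroyed by Eve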
Section 3.3. SOFTWARE
SERVER APPLICATION – PROTOTYPE CONTROL UNIT
In order to control the prototype unit, a software control unit will be needed. The programming languages of choice will be:
Python;
NodeJs;
Some of the advantages of using Python are:
It is scalable and reliable – the syntax is easy and intuitive;
It has a managed memory – the garbage collector comes in handy when declaring variables and later dumping them;
It is a dynamically typed language – at variable declaration one does not need to specify the variable type, as it is automatically assigned at the first operation; this type may even change from assignment to assignment;
It has a wide range of network libraries – which allows us to make the network integration easier (e.g. sockets);
It has a wide range of scientific libraries – Python is currently the most used programming language within the academic and scientific communities. The libraries are manageable with Anaconda;
It integrates with TensorFlow, Theano and Keras – artificial intelligence and machine learning frameworks – offering better development productivity than, for example, using TensorFlow from C++;
It integrates with Django and NodeJs well;
Some of the advantages of using NodeJs are:
It supports a wide range of programming languages as the computing core – Python amongst them;
It has a wide range of features – expressjs included – a framework for webserver deployment;
It integrates well with Arduino C++ – needed for embedded applications;
It is a faster, more comprehensive, more intuitive and more efficient framework than Django;
SERVER APPLICATION – COGNITIVE COMPUTING UNIT
Besides Python and NodeJs, the cognitive computing unit that acts on the server, is implemented using:
TensorFlow;
Theano;
Keras;
These three are deep neural network frameworks that learn to perform a task through training feedback and work through layers of data (nodes) to determine the correct outcome.
Having built-in functions to create, model, train and evaluate neurons, they come in handy when dealing with complex architectures like LSTM or GANN, as illustrated by the sketch below.
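As an illustration of how compact these frameworks are, the snippet below defines a small LSTM classifier in Keras (a generic sketch – the layer sizes, input shape and loss function are assumptions, not the configuration used in this project):
from keras.models import Sequential # Keras model container
from keras.layers import LSTM, Dense # recurrent and dense layers
model = Sequential() # simple stack of layers
model.add(LSTM(32, input_shape=(20, 4))) # 20 time steps, 4 features per step (assumed shape)
model.add(Dense(1, activation='sigmoid')) # binary decision, e.g. intrusion / no intrusion
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary() # print the resulting layer summary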
EMBEDDED APPLICATION
The programming language of choice when it comes to embedded applications is not so much a choice as a restriction imposed by the programmable circuits manufacturer, ARDUINO. The boards are programmed in an embedded dialect of C++.
The embedded implementation is quite different from a usual C++ implementation: great care must be taken with variable memory allocation, since the circuit board has far less RAM and ROM than a PC program would assume.
CLIENT APPLICATION
The client application is implemented using the same technologies as the server application – thus reducing the maintenance effort for the system.
Specifications
4.1. System Description
In this chapter I will discuss the planning and implementation aspects of the development stages of my cryptographic prototype. The relevance of this step resides in the directions that it traces for later analysis and implementation.
From the hardware point of view, the proposed system (Figure 0.1) is configured upon client-server network architecture and it contains:
The authentication-cryptology server – this server can be installed on any PC with enough resources to process the data. The server is a dedicated PC that can serve anything from a small household to a large government facility.
The WiFi router – which, according to 2017 surveys, is present in most households that own at least a PC.
The Quantum Key Distribution unit (QUALIFY – Quantum Authentication based on Light Fidelity) – used in this project to ensure the authentication and authorization processes in the cyber security prototype.
The LiFi actuator – built around a household power extension cable – serving as many light sources as possible without sacrificing the light source integrity, thus granting both reusability and scalability.
The LED lamps – they might be any light sources with an LED bulb plugged in.
The client – mobile peers that have a light sensor incorporated. Today most of the laptops, mobile phones (smartphones) and IoT devices (smart watches, smart TVs, smart appliances etc.) have a light sensor incorporated.
Two 220V outlets.
Internet connection – optional.
From the software point of view, the application is divided into multiple cores with individual computation capabilities and roles assigned (Figure 0.2), such as:
Visible Light Communications
The cognitive core – containing the neural cryptology module, the spatial vectorization module, the intrusion detection module and the action module.
The Tree Parity Machine – the neural module for password synchronization.
The Qualify Administrator – Server Module containing:
The 1st and 2nd photon encryption modules and the 1st to 4th photo-detection modules.
The LAN communication module.
The Actuator module containing:
The high voltage relay control module.
The LAN communication module.
The WiFi control module for communication.
4.2. FUNCTIONALITIES
In this section I will discuss the proposed functionalities of the system presented beforehand. This stands as an outline of further implementation.
HARDWARE
The QUALIFY system takes the input buffer for authentication and generates the light signals.
The signal is treated with quantum dielectric and quantum-metal reflecting materials, ensuring photonic superposition and asynchrony.
The QUALIFY system sends the signal two ways:
To the LiFi actuator, where the signal is transformed back into light impulses and received by the client device.
To the Server, where the buffer waits at eth0 port to be combined with the response signal from the client-device.
The response is gathered back into the server, processed and forwarded to the QUALIFY system for validation.
Here, the system puts the signal through a quantum interferometer, looking for discrete light quanta.
If the authentication is validated, the system proceeds to synchronize the TPM.
All communications are delivered through LAN cables to and from the server.
SOFTWARE
The client requests authentication.
The server administrator opens up the required ports, initializes the folders and the cognitive core as well as the QUALIFY Administrator.
The Server sends the authentication buffer to the QUALIFY system.
Once modulated, the signal is used to activate the synchronization process of Tree Parity Machine between server and Client.
Once synchronized, the server activates the location core and reactivates the authorization process with the QUALIFY system.
The messages are encrypted with neural networks, taking in the modulated signal and the position of the client as well as the IP addresses.
If any attack is registered by the QUALIFY system, the server shuts down the communication channels, changes the IP addresses, ports, weights and encryption base, then restarts the authentication process.
If this is the case, all the clients send a continuous request for authentication until the applications are up and running.
Detailed Planning and Development
5.1. Engineering
PROTOTYPE STAND
In order to put the research conclusions into practice, we need to assemble a demonstrative and experimental stand to test the concepts of light fidelity.
The materials chosen for this stage are:
Transparent plexiglass;
Blue plexiglass;
Threaded rods;
Nuts with stopper;
Screw extensions;
Hinges;
Steel corners;
Hexagonal screws;
Double adhesive tape;
Electric insulation tape;
The instruments needed to process the material are:
Workbench;
Circular saw with water feeder;
Drilling machine;
Drill bits (3 mm, 6 mm, 10 mm);
Buffer tape;
Electronic micro-screwdriver;
Scissors;
Ruler;
Angle measuring device;
Leveler;
Blowtorch;
Shades (for the windows);
The stand was custom designed to fit the following setup:
A base tier for the circuits;
A top tier for the optic unit;
A lid over the top unit (to ensure light protection due to photodiodes sensitivity);
A side tri-folding panel to protect the unit from dust as well as ensuring a support for the lid (when open) and quick access to the components (both on the lower and the upper rack);
4 raising metal legs;
The prototype stand was built following a specific set of instructions and dimensions:
Bottom Tier – Figure 0.2, with circuit topology described in Figure 0.3
Top Tier – Figure 0.1 with optic mirrors topology described in Figure 0.4
Top lid (50x40x10 cm) made from opaque blue plexiglass and fixed with hinges by the top tier back side.
THE OPTICAL UNIT
The optic unit of this prototype is quite special since it applies quantum mechanical properties to the photons emitted by the laser source.
To fully understand the phenomena that happen in the optic unit, we must first understand some of the properties of photons.
But what is a photon?
A photon is an elementary particle, the quantum of the electromagnetic field including electromagnetic radiation such as light, and the force carrier for the electromagnetic force (even when static via virtual photons). The photon has zero rest mass and always moves at the speed of light within a vacuum.
A photon has two possible polarization states. In the momentum representation of the photon, which is preferred in quantum field theory, a photon is described by its wave vector, which determines its wavelength λ and its direction of propagation. A photon's wave vector may not be zero and can be represented either as a spatial 3-vector or as a (relativistic) four-vector; in the latter case it belongs to the light cone (Figure 0.5).
Different signs of the four-vector denote different circular polarizations, but in the 3-vector representation one should account for the polarization state separately; it actually is a spin quantum number. In both cases the space of possible wave vectors is three-dimensional.
The photon is the gauge boson for electromagnetism and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavor quantum numbers) are zero. Also, the photon does not obey the Pauli exclusion principle.
Photons, like all quantum objects, exhibit wave-like and particle-like properties. Their dual wave–particle nature can be difficult to visualize. The photon displays clearly wave-like phenomena such as diffraction and interference on the length scale of its wavelength.
For example, a single photon passing through a double-slit experiment exhibits interference phenomena, but only if no measurement is made at the slit. A single photon passing through a double-slit experiment lands on the screen with a probability distribution given by its interference pattern determined by Maxwell's equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; it does not spread out as it propagates, nor does it divide when it encounters a beam splitter.
Rather, the photon seems to be a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, systems much smaller than its wavelength (in my case 650nm – see Figure 0.6 above), such as an atomic nucleus (≈10−15 m across) or even the point-like electron.
Nevertheless, the photon is not a point-like particle whose trajectory is shaped probabilistically by the electromagnetic field, as conceived by Einstein and others; that hypothesis was also refuted by the photon-correlation experiments cited above.
According to our present understanding, the electromagnetic field itself is produced by photons, which in turn result from a local gauge symmetry and the laws of quantum field theory.
A key element of quantum mechanics is Heisenberg's uncertainty principle, which forbids the simultaneous measurement of the position and momentum of a particle along the same direction. Remarkably, the uncertainty principle for charged, material particles requires the quantization of light into photons, and even the frequency dependence of the photon's energy and momentum.
This uncertainty principle, together with Schrödinger's time-dependent equation (Figure 0.7), is the engine underneath the quantum-resistant authentication.
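For reference (since Figure 0.7 is not reproduced here in text), the two relations mentioned above can be written in their standard form:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\qquad\qquad
i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t) \;=\; \hat{H}\,\Psi(\mathbf{r},t)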
To understand the physical impact of a photon, we first take the fundamental mass–energy formula of relativity:
which, with respect to light, becomes
By deriving the fundamental formula of relativity with respect to mass, we obtain the following:
By deriving the fundamental formula of relativity with respect to velocity (which in this case we presume not to equal the speed of light), we obtain the following:
The result of this theoretical approach may be seen in Figure 0.8 below.
Figure 0.8 – Photon Mass
As a conclusion, photons do carry an effective mass when moving, hence the measurable photo-impact – a property we will use later.
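A minimal worked form of the argument above, using only the standard relations between energy, frequency and momentum (the numeric value assumes the 650 nm laser diode mentioned earlier):
E = h\nu = \frac{hc}{\lambda}, \qquad p = \frac{E}{c} = \frac{h}{\lambda}, \qquad m_{\mathrm{eff}} = \frac{E}{c^{2}} = \frac{h}{\lambda c}
m_{\mathrm{eff}} = \frac{6.626\times10^{-34}\,\mathrm{J\,s}}{(650\times10^{-9}\,\mathrm{m})(3\times10^{8}\,\mathrm{m/s})} \approx 3.4\times10^{-36}\,\mathrm{kg}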
From the research paper we know that photon beams can be split by dielectric materials (such as glass) and enter superposition. In our case (see Figure 0.4) there are two types of dielectric mirrors involved:
The transparent mirror – acting as a beam splitter. From a quantum point of view, once a photon encounters a dielectric material, it cannot pass both the reflective and the refractive way through it, so it must choose which way it goes.
Our setup has 8 such mirrors – so the quantum factor is 16 (8 on each side), generating a number of possibilities equal to 20,922,789,888,000 at each iteration (each photon emission); this figure can be verified with the short check after this list.
The reflective mirror. This mirror has special properties since it is made with a pure silver backing. From a quantum point of view, upon reflection, the energy lost by the beam manifests in one of two ways:
The photons are deviated by the strong nuclear force and thus lose speed;
Deviated photons with a small enough angle of incidence will lose so much power that they eventually disappear (recalling the principle demonstrated just before);
The reflection mirror is made following a specific protocol to ensure the quality of the silver backing as well as its thickness (in microns).
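The possibility count quoted for the transparent mirrors can be checked directly: 20,922,789,888,000 is exactly 16!, i.e. the number of orderings of the 16 mirror outcomes (reading the figure as a permutation count is the editor's assumption):
from math import factorial # exact integer factorial
print(factorial(16)) # 20922789888000
print(factorial(16) == 20922789888000) # True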
Before making the mirror solution, we have to clean the glass slides of oils and debris. To that end, we submerge the slides in 98% isopropyl alcohol and wait for 24 hours. After that, the slides are taken out of the solution, rinsed with oxygenated water (hydrogen peroxide), then put under a glass bell.
Next, according to the material available at http://www.chymist.com/silver%20flask.pdf, the process of making the silver mirror begins with preparing the silver nitrate (to ensure the desired quality and concentration).
Nitric acid and pure silver are mixed and left to react at room temperature. Once the solution has cleared up, it is left to dry for 72 hours. The reaction (balanced here for concentrated nitric acid) is given below:
Ag + 2 HNO3 → AgNO3 + NO2↑ + H2O
Once the silver nitrate is ready, we mix previously chilled ammonia with 0.1g of dextrose and a 25% silver nitrate solution (100g water to 25g silver nitrate crystals).
The reaction below – the classical Tollens (silver mirror) reaction, in which the aldehyde group of dextrose reduces the diamminesilver(I) complex – produces metallic silver that we want to deposit on our glass slides: RCHO + 2 [Ag(NH3)2]OH → RCOONH4 + 2 Ag↓ + 3 NH3 + H2O.
The solution is highly reactive, so to make sure the coating is uniform, we spin the plate upon which the slides are resting and add the solution dropwise. This creates a perfectly uniform layer of silver (to make this more efficient I used stepper motors to spin the microplate at 780 RPM).
Once formed, the silver backing is coated with polypropylene paint to ensure weather resistance. The final product is highly reflective and electrically insulated (to prevent energy loss by grounding the metal layer).
All the slides are then dried, rinsed with oxygenated water and mounted on steel corners and fixed with double adhesive tape to ensure further insulation from vibrations.
Furthermore, the distances are marked to ensure accuracy in positioning the mirrors. A green 2000 mW laser is used to calibrate the laser beam at two distances (before the focal point, and after it). This action is performed in complete darkness.
The slides are then mounted to the top tier with more double adhesive tape to ensure even more vibration protection and the definitive lasers are mounted alongside with the photo-resistors.
One last aspect: the light coming out of the laser diode is only partially polarized. In order to make this prototype work, we need fully polarized light. Since I did not want any moving parts, I used a solution (derived from the research paper) known as Brewster polarization.
When light encounters a boundary between two media with different refractive indices, some of it is usually reflected as shown in the figure above. The fraction that is reflected is described by the Fresnel equations, and is dependent upon the incoming light's polarization and angle of incidence.
The physical mechanism for Brewster Polarization can be qualitatively understood from the manner in which electric dipoles in the media respond to p-polarized light. One can imagine that light incident on the surface is absorbed, and then re-radiated by oscillating electric dipoles at the interface between the two media.
The polarization of freely propagating light is always perpendicular to the direction in which the light is travelling. The dipoles that produce the transmitted (refracted) light oscillate in the polarization direction of that light. These same oscillating dipoles also generate the reflected light.
Figure 0.9 – Brewster Angle
However, dipoles do not radiate any energy in the direction of the dipole moment. If the refracted light is p-polarized and propagates exactly perpendicular to the direction in which the light is predicted to be specularly reflected, the dipoles point along the specular reflection direction and therefore no light can be reflected.
For a glass medium (n2 ≈ 1.5) in air (n1 ≈ 1), Brewster's angle for visible light is approximately 56°, while for an air-water interface (n2 ≈ 1.33), it is approximately 53°. Since the refractive index for a given medium changes depending on the wavelength of light, Brewster's angle will also vary with wavelength.
The phenomenon of light being polarized by reflection from a surface at a particular angle was first observed by Étienne-Louis Malus in 1808. He attempted to relate the polarizing angle to the refractive index of the material, but was frustrated by the inconsistent quality of glasses available at that time. In 1815, Brewster experimented with higher-quality materials and showed that this angle was a function of the refractive index, defining Brewster's law.
Brewster's angle (Figure 0.9) is often referred to as the "polarizing angle", because light that reflects from a surface at this angle is entirely polarized perpendicular to the incident plane ("s-polarized"). A glass plate or a stack of plates placed at Brewster's angle in a light beam can, thus, be used as a polarizer. The concept of a polarizing angle can be extended to the concept of a Brewster wavenumber to cover planar interfaces between two linear bianisotropic materials. In the case of reflection at Brewster's angle, the reflected and refracted rays are mutually perpendicular.
For magnetic materials, Brewster's angle can exist for only one of the incident wave polarizations, as determined by the relative strengths of the dielectric permittivity and magnetic permeability. This has implications for the existence of generalized Brewster angles for dielectric metasurfaces.
In my case, this angle is implemented by directing the laser beam onto the first dielectric mirror at the Brewster angle (Figure 0.4), computed as shown below.
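The aiming angle follows from Brewster's law; the values below assume the generic refractive indices quoted above (n1 ≈ 1 for air, n2 ≈ 1.5 for the glass slide, n2 ≈ 1.33 for water):
\tan\theta_B = \frac{n_2}{n_1}
\qquad\Rightarrow\qquad
\theta_B = \arctan(1.5) \approx 56.3^{\circ}\ \text{(glass in air)}, \qquad \theta_B = \arctan(1.33) \approx 53.1^{\circ}\ \text{(water in air)}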
Programmable circuits
My programmable platform of choice is Arduino, for its functionality, versatility and ease of integration with any type of technology.
In this case I used three ARDUINO UNO boards, whose schematic may be found below in Figure 0.12.
My choice of LAN adaptor was the ENC28J60, as it is simple to use, cheap and intuitive. Its functioning scheme is shown in Figure 0.11 below.
Figure 0.11 – LAN Adaptor
Figure 0.12 – ARDUINO UNO SCHEME
The LAN adapter is connected to the UNO using Dupont cables according to the following scheme: d12 – SO, d11 – SI, d13 – SCK, d8 – CS, 5V – 5V, GND – GND.
I connected the D+, D-, SGN and DO pins to the respective pins on my laser modules and photo-resistors. Afterwards I connected the SSR module of the actuator (Figure 0.13) to the actuator programmable board and hooked up the 220V AC power socket.
As a quick fix, while making sure that my Uno is not overloaded by power consumption, I plugged the 2x5V pins into the power rail of a breadboard, thus supplying power to every single component. A quick note here: the LAN card was mounted on a separate rail to prevent overloading and irreparable damage to the sensors and lasers in case of a fault.
I mounted the switch and glued everything in place. Afterwards I cabled all the boards and hooked them up to the switch. Finally, the most important step: I set up the power lines.
Final prototype assembly results: Figure 0.14, Figure 0.15, Figure 0.16.
Figure 0.14 – Lower Tier Assembled
Figure 0.15 – Top Tier Assembled
Figure 0.16 – QUALIFY Assembled
5.2. SOFTWARE
On the software side, I only want to trace some outlines here regarding the aspects implemented in the next chapter:
Implementing the embedded applications for QUALIFY;
Implementing the embedded application for LiFi Actuator;
Implementing the Server application for QUALIFY;
Implementing the server modules for LAN and WiFi communications;
Implementing the TPM;
Implementing the Geolocation tracker (LSTM);
Implementing the Attack Recognition (LSTM);
Implementing the cryptology core;
Implementation
C++ Arduino application for LiFi Actuator
#include <UIPEthernet.h> // calling the library
#define SSR 2 // define pin 2 for SSR signal
EthernetUDP udp; // open a new UDP connection
void setup() { // open the setup session
pinMode(SSR, OUTPUT); // set the digital pin 2 mode to output
Serial.begin(9600); // begin serial communication at 9600 baud
uint8_t mac[6] = {0x00,0x01,0x02,0x03,0x04,0x05}; // define a unique address
Ethernet.begin(mac,IPAddress(192,168,1,5)); // open a new communication channel
int port = udp.begin(8051); // open up a port for communication
} // close the setup session
void loop() { // begin the iteration field
//check for new udp-packet:
int size = udp.parsePacket(); // gather the size of the packet if any
int SYNC = 50; // define refresh ratio (clock)
if (size > 0) { // check if there are any packages
do // execute
{
char* msg = (char*)malloc(size+1); // allocate a buffer for the incoming packet
int len = udp.read(msg,size+1); // read the packet and get its length
int len_c = len; // copy the length
msg[len]=0; // terminate the buffer with a NUL character
do{ // execute
if(msg[len-len_c] == '1'){ // check if character is 1 and if so
digitalWrite(SSR, HIGH); // open the SSR
delay(SYNC); // await 50ms
digitalWrite(SSR, LOW); // close the SSR
delay(SYNC); // await 50ms
} // close condition
if(msg[len-len_c] == '0'){ // check if character is 0 and if so
digitalWrite(SSR, LOW); // keep the SSR closed
delay(SYNC); // await 50ms
digitalWrite(SSR, LOW); // keep the SSR closed
delay(SYNC); // await again 50ms
}// close condition
len_c--; // move on to the next character
}while(len_c); // execute while there are remaining characters
free(msg); // flush the message memory
} // close the do block
while ((size = udp.available())>0); // repeat while another packet with data is available
//finish reading this packet:
udp.flush(); // flush the UDP channel
int port; // define new empty port
do // execute
{
//send new packet back to ip/port of client. This also
//configures the current connection to ignore packets from
//other clients!
port = udp.beginPacket(udp.remoteIP(),udp.remotePort()); // begin new transmission
//beginPacket fails if remote ethaddr is unknown. In this case an
//arp-request is send out first and beginPacket succeeds as soon
//the arp-response is received.
}
while (!port); // execute while there is no port active
port= udp.endPacket(); // end the communication channel
udp.stop(); // stop the udp protocol and
//restart with new connection to receive packets from other clients
}
}
C++ Arduino QUALIFY Laser and Sensors
After receiving the packets, the signals from the previous batch are sent out. The implementation of this algorithm is similar to the one before, with a minor addition:
[…]
udp.flush(); // flush the UDP channel
int port; // define new empty port
#define check 4
#define transmit 3
void setup(){
[…]
pinMode(check, OUTPUT);
pinMode(transmit, OUTPUT);
[…]
}
do // execute
{
//send new packet back to ip/port of client. This also
//configures the current connection to ignore packets from
//other clients!
port = udp.beginPacket(udp.remoteIP(),udp.remotePort());
udp.write(check, transmit); // send the data gathered from sensors
//beginPacket fails if remote ethaddr is unknown. In this case an
//arp-request is send out first and beginPacket succeeds as soon
//the arp-response is received.
}
while (!port); // execute while there is no port active
port= udp.endPacket(); // end the communication channel
udp.stop(); // stop the udp protocol and
//restart with new connection to receive packets from other clients
}
Python – code snippet – send packets to QUALIFY
import socket # get sockets library
ip = js.call('igetIp') # call the js main program to acquire the IP
port = js.call('openPort') # call the js main program to open a port
signal = js.call('tpmSignal') # call the js main program to obtain the TPM signal
while signal: # execute while there is a signal
    UDP_IP = ip # destination IP acquired above
    UDP_PORT = port # destination port acquired above
    MESSAGE = str(signal) # serialize the signal
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # initialize a UDP socket
    sock.sendto(bytes(MESSAGE, "utf-8"), (UDP_IP, UDP_PORT)) # send the message
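For completeness, a minimal sketch of the receiving end of such a UDP exchange is shown below (this counterpart is not part of the original snippet; the listening address and buffer size are illustrative assumptions):
import socket # standard library sockets
UDP_IP = "0.0.0.0" # listen on all interfaces (assumption)
UDP_PORT = 8051 # the port the actuator opens in the Arduino sketch above
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP socket
sock.bind((UDP_IP, UDP_PORT)) # bind to the chosen address and port
while True:
    data, addr = sock.recvfrom(1024) # blocking receive, 1024-byte buffer
    print("received %r from %s" % (data, addr)) # hand the buffer on to the TPM / cognitive core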
Python – code snippet – TPM
import numpy as np # numerical library used for the weight matrix

class TreeParityMachine: # wrapper class implied by the methods below
    def __init__(self, k, n, l): # initialize the machine
        self.k = k # hidden neurons
        self.n = n # input neurons per hidden unit
        self.l = l # weight range [-l, l]
        self.W = np.random.randint(-l, l + 1, [k, n]) # randomize weights

    def get_output(self, X): # compute the parity output tau
        k = self.k # hidden neurons
        n = self.n # input neurons per hidden unit
        W = self.W # weights
        X = X.reshape([k, n]) # reshape the input vector to k rows of n
        sigma = np.sign(np.sum(X * W, axis=1)) # compute the sign of each hidden unit
        tau = np.prod(sigma) # output = product of the hidden signs
        self.X = X # store the input vector for the learning step
        self.sigma = sigma # store the hidden signs
        self.tau = tau # store the output
        return tau # return output
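The snippet above only computes the output tau; for synchronization, each party also needs a learning step. Below is a sketch of the Hebbian update rule commonly used for tree parity machine synchronization (the helper name hebbian_update and its exact form are the editor's assumptions, not part of the original code):
import numpy as np # same library the TPM snippet relies on
def hebbian_update(tpm, partner_tau): # one learning step after exchanging outputs
    # assumes get_output() was just called, so tpm.X, tpm.sigma and tpm.tau are set
    if tpm.tau != partner_tau: # only learn when both outputs agree
        return
    for i in range(tpm.k):
        if tpm.sigma[i] == tpm.tau: # only hidden units that agree with the output
            tpm.W[i] += int(tpm.sigma[i]) * tpm.X[i] # move weights towards the input pattern
    np.clip(tpm.W, -tpm.l, tpm.l, out=tpm.W) # keep weights inside the allowed range [-l, l]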
JS – code snippet – integrate Python
var spawn = require("child_process").spawn; // import the child process spawner
var thread = spawn('python',["./../sendStream.py", ip, port]); // call thread
const express = require('express') // open express js libraries
const app = express() // open new app
gateway = spawn('python',["./../getip.py", ip]); // call thread
port = spawn('python',["./../port.py", ip]); // call thread
NodeJS – code snippet – ExpressJS Server
app.get('/', function (req, res) { // route handler for the root path
  res.send(gateway) // send the IP gathered by the Python child process
})
app.listen(port, function () {
  // start listening on the port gathered above
})
Prototype Usage
The client registers an authentication request on server.
The server opens up all the necessary ports and sends QUALIFY an authentication request.
The server constantly requests a location update (coordinates) from the client.
The client's position and physical address are processed in the cognitive core.
QUALIFY gets an authentication key under the form of a signal.
QUALIFY processes the key under quantum transformations and then transmits it to LiFi Actuator and server.
The client interprets the light signal and processes the input in TPM.
The server and the client synchronize their TPMs over WiFi.
Once synchronized, the QUALIFY receives the stop authentication signal.
The actuator is stopped.
The processed synchronized values are fed into the GANN to encrypt messages.
For each participant, the server reserves an instance of TPM as well as a dedicated folder with encryption buffer starter.
If any threat occurs, the cognitive core will shut down the system and rearrange the MAC addresses, IPs and ports, rendering the system invisible.
Conclusions and Further R&D
Finally, after a long journey through cryptosystems, light particles, silver and a whole bunch of circuits, I can say that I achieved my goals. I understood the main problems with authentication and cryptology and researched my way towards a list of solutions. Sure, none of them is easy to implement (nor was it easy to research), but it was worth the effort.
This research paper was meant to produce a prototype which can encrypt data (either for authentication – which indeed was the case – or for message encryption). The outcome of this project has little relevance now – it becomes truly relevant with the mass production of quantum computers and off-the-shelf LiFi.
This is not a product! This is an academic proceeding on the cutting edge of computer science today – sure, as cutting edge as a bachelor student may get. This is a work of passion. Passion for discovery, passion for computer science, passion for academics. And this passion will carry this work (and not only this one) far towards the peaks of knowledge.
Furthermore, I would like to integrate the components better and to reinforce the prototype stand so that it withstands transportation without damage. I would also like to implement a full cryptology algorithm on top of it (not just authentication).
Also, as a direction of further development, it would be interesting to adapt passive light modulation technologies (e.g. an LCD or a salvaged scrapyard monitor display) to modulate data for VLC transmission – given the analogy between serial and parallel communications, there would be a whole lot more to discover when a single iteration could transmit 1024×2048 bits.
In the end we should look forward to the future – a future in which we contribute to the safety of mankind.
DECLARATION OF AUTHENTICITY
OF THE GRADUATION THESIS
The undersigned PANDI ALEXANDRU, identified with ID card series TZ no. 056682, CNP 1950523350067, author of the thesis "NEURAL QUANTUM CRYPTOLOGY AND AUTHENTICATION, DISPENSED IN LIGHT FIDELITY APPLICATIONS", elaborated for the BACHELOR graduation examination organized by the Faculty of AUTOMATION AND COMPUTERS, DEPARTMENT OF COMPUTER SCIENCE (distance learning) of the "POLITEHNICA" UNIVERSITY OF TIMIȘOARA, JULY session of the academic year 2016 – 2017, and taking into account the provisions of art. 39 of RODPI – UPT, hereby declare on my own responsibility that this thesis is the result of my own intellectual activity, does not contain plagiarized portions, and that the bibliographic sources were used in accordance with Romanian legislation and the international conventions on copyright.
Timișoara,
Date
27 JULY 2017
Signature