Smartphone sensors can be leveraged by malicious apps for a plethora of different attacks, which can also be deployed by malicious websites through the HTML5 WebAPI. In this paper we provide a comprehensive evaluation of the multifaceted threat that mobile web browsing poses to users, by conducting a large-scale study of mobile-specific HTML5 WebAPI calls used in the wild. We build a novel testing infrastructure consisting of actual smartphones on top of a dynamic Android app analysis framework, allowing us to conduct an end-to-end exploration. Our study reveals the extent to which websites are actively leveraging the WebAPI for collecting sensor data, with 2.89% of websites accessing at least one mobile sensor. To provide a comprehensive assessment of the potential risks of this emerging practice, we create a taxonomy of sensor-based attacks from prior studies, and present an in-depth analysis by framing our collected data within that taxonomy. We find that 1.63% of websites could carry out at least one of those attacks. Our findings emphasize the need for a standardized policy across browsers and the ability for users to control what sensor data each website can access.
Android’s app ecosystem relies heavily on third-party libraries, as they facilitate code development and provide a steady stream of revenue for developers. However, while Android has moved towards a more fine-grained runtime permission system, users currently lack the resources required for deciding whether a specific permission request is actually intended for the app itself or is issued by potentially dangerous third-party libraries.
In this paper we present Reaper, a novel dynamic analysis system that traces the permissions requested by apps in real time and distinguishes those requested by the app’s core functionality from those requested by third-party libraries linked with the app. We implement a sophisticated UI automator and conduct an extensive evaluation of our system’s performance and find that Reaper introduces negligible overhead, rendering it suitable both for end users (by integrating it in the OS) and for deployment as part of an official app vetting process.
Our study on over 5K popular apps demonstrates the large extent to which personally identifiable information is being accessed by libraries and highlights the privacy risks that users face. We find that an impressive 65% of the permissions requested do not originate from the core app but are issued by linked third-party libraries, 37.3% of which are used for functionality related to ads, tracking, and analytics. Overall, Reaper enhances the functionality of Android’s runtime permission model without requiring OS or app modifications, and provides the necessary contextual information that can enable users to selectively deny permissions that are not part of an app’s core functionality.
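The attribution step at the heart of this approach can be pictured as inspecting the call stack at the moment a permission is requested and checking which code the nearest application-level frame belongs to. The sketch below is only illustrative of that idea, not Reaper's actual in-OS implementation; the `attribute_request` helper and the library-prefix list are hypothetical.

```python
# Hypothetical list of known third-party library package prefixes;
# a real deployment would maintain a curated database of library packages.
THIRD_PARTY_PREFIXES = ("com.google.ads.", "com.flurry.", "com.unity3d.")

def attribute_request(app_package, stack):
    """Attribute a permission request to the app's core code or to a
    linked third-party library by walking the Java call stack, framework
    frames first."""
    for frame in stack:
        if frame.startswith(app_package):
            return "app core"
        if frame.startswith(THIRD_PARTY_PREFIXES):
            return "third-party library"
    return "unknown"
```

For example, a location request whose nearest non-framework frame belongs to an analytics SDK would be flagged as library-issued rather than app-issued.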
The exposure of location data constitutes a significant privacy risk to users as it can lead to de-anonymization, the inference of sensitive information, and even physical threats. In this paper we present LPAuditor, a tool that conducts a comprehensive evaluation of the privacy loss caused by publicly available location metadata. First, we demonstrate how our system can pinpoint users’ key locations at an unprecedented granularity by identifying their actual postal addresses. Our experimental evaluation on Twitter data highlights the effectiveness of our techniques which outperform prior approaches by 18.9%-91.6% for homes and 8.7%-21.8% for workplaces. Next we present a novel exploration of automated private information inference that uncovers “sensitive” locations that users have visited (pertaining to health, religion, and sex/nightlife). We find that location metadata can provide additional context to tweets and thus lead to the exposure of private information that might not match the users’ intentions.
We further explore the mismatch between user actions and information exposure and find that older versions of the official Twitter apps follow a privacy-invasive policy of including precise GPS coordinates in the metadata of tweets that users have geotagged at a coarse-grained level (e.g., city). The implications of this exposure are further exacerbated by our finding that users are considerably privacy-cautious with regard to exposing precise location data. When users can explicitly select what location data is published, there is a 94.6% reduction in tweets with GPS coordinates. As part of current efforts to give users more control over their data, LPAuditor can be adopted by major services and offered as an auditing tool that informs users about sensitive information they (indirectly) expose through location metadata.
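The key-location identification step can be illustrated with a toy version of the underlying clustering idea: snap geotagged points to a coarse grid and pick the cell most visited during typical at-home hours. This is only a sketch; the `likely_home` helper, the grid size, and the night-hour window are assumptions, not LPAuditor's actual algorithm, which resolves full postal addresses.

```python
from collections import Counter
from datetime import datetime

def likely_home(points, cell=0.001):
    """Given (lat, lon, iso_timestamp) tuples, return the grid cell most
    frequently visited during assumed at-home hours (22:00-06:00),
    or None if no night-time points exist."""
    night = Counter()
    for lat, lon, ts in points:
        hour = datetime.fromisoformat(ts).hour
        if hour >= 22 or hour < 6:
            # snap coordinates to a roughly 100 m grid cell
            night[(round(lat / cell) * cell, round(lon / cell) * cell)] += 1
    return night.most_common(1)[0][0] if night else None
```

A workplace could be inferred analogously by switching the filter to weekday office hours.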
In this paper, we demonstrate the powerful capabilities that modern browser APIs provide to attackers by presenting MarioNet: a framework that allows a remote malicious entity to control a visitor’s browser and abuse its resources for unwanted computation or harmful operations, such as cryptocurrency mining, password-cracking, and DDoS. MarioNet relies solely on already available HTML5 APIs, without requiring the installation of any additional software. In contrast to previous browser-based botnets, the persistence and stealthiness characteristics of MarioNet allow the malicious computations to continue in the background of the browser even after the user closes the window or tab of the initial malicious website. We present the design, implementation, and evaluation of a prototype of MarioNet that is compatible with all major browsers, and discuss potential defense strategies to counter the threat of such persistent in-browser attacks. Our main goal is to raise awareness regarding this new class of attacks, and inform the design of future browser APIs so that they provide a more secure client-side environment for web applications.
Cybercriminals use return-oriented programming (ROP) techniques to attack systems and IoT devices. While defenses have been developed, not all of them are applicable to constrained devices. We present Shakedown, a compile-time randomizing build tool that creates several versions of a binary, each with a distinct memory layout. An attack developed against one device will not work on another device with a different memory layout. We tested Shakedown on an industrial IoT device and showed that its normal functionality remained intact while an exploit was blocked.
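The defensive principle can be demonstrated with a toy model of two build variants. In the sketch below, all function names, sizes, and the `build_layout` helper are hypothetical, and the layout permutation is passed in explicitly for clarity, whereas Shakedown derives it at compile time; the point is simply that a gadget address harvested from one layout no longer points at the intended code in another.

```python
def build_layout(functions, order, base=0x1000):
    """Lay functions out contiguously in the given order, returning a
    name -> start-address map (a toy stand-in for one build variant)."""
    layout, addr = {}, base
    for idx in order:
        name, size = functions[idx]
        layout[name] = addr
        addr += size
    return layout

# The same code, built twice with different (normally randomized) orders.
functions = [("login", 0x40), ("parse", 0x80), ("exec_cmd", 0x30)]
build_a = build_layout(functions, [0, 1, 2])
build_b = build_layout(functions, [2, 0, 1])

# An exploit hard-codes the gadget address observed in build A...
gadget = build_a["exec_cmd"]
# ...but in build B that address no longer lands on exec_cmd.
assert build_b["exec_cmd"] != gadget
```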
SMEs constitute a very large part of the economy in every country and play an important role in economic growth and social development. Like large enterprises, SMEs are frequent targets of cybersecurity attacks; unlike large enterprises, however, they mostly have limited capabilities regarding cybersecurity practices. Given the increasing cybersecurity risks and the large impact these risks may have on SMEs, assessing and improving their cybersecurity capabilities is crucial for their sustainability.
This research aims to provide SMEs with an approach for assessing and improving their cybersecurity capabilities by integrating key elements from existing industry standards.
Critical infrastructures are important assets for the everyday life and well-being of people, who can be affected dramatically if these infrastructures are vulnerable and not protected against various threats. Given the increasing cybersecurity risks and the large impact these risks may have on critical infrastructures, assessing and improving the cybersecurity capabilities of service providers and administrators is crucial for sustainability.
This research aims to provide a questionnaire model for assessing and improving cybersecurity capabilities based on industry standards. It further aims to offer service providers and administrators of critical infrastructures personalized guidance and an implementation plan for improving their cybersecurity capabilities.
Many applications have security vulnerabilities that can be exploited. It is practically impossible to find all of them, due to the NP-complete nature of the testing problem. Security solutions provide defenses against these attacks through continuous application testing, fast patching of vulnerabilities, automatic deployment of patches, and virtual patching detection techniques deployed in network and endpoint security tools. These techniques are limited by the need to find vulnerabilities before the ‘black hats’ do. We propose an innovative technique to virtually patch vulnerabilities before they are found. We leverage testing techniques for supervised-learning data generation, and show how artificial intelligence techniques can use this data to create predictive deep neural-network models that read an application’s input and predict in real time whether it is potentially malicious. We set up an ahead-of-threat experiment in which we generated data on old versions of an application and then evaluated the predictive model’s accuracy on vulnerabilities found years later. Our experiments show ahead-of-threat detection of LibXML2 and LibTIFF vulnerabilities with 91.3% and 93.7% accuracy, respectively. We expect to continue work in this field of research and provide ahead-of-threat virtual patching for more libraries. Success in this research can change the current state of endless racing after application vulnerabilities and put the defenders one step ahead of the attackers.
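A drastically simplified version of this pipeline can be sketched as follows: featurize raw application input and train a classifier to flag potentially malicious inputs. The features, the plain perceptron (standing in for the paper's deep neural network), and the synthetic training samples are all illustrative assumptions, not the actual system.

```python
def features(data):
    """Crude input features: normalized length, share of non-printable
    bytes, and a constant bias term."""
    nonprint = sum(1 for b in data if b < 32 or b > 126) / max(len(data), 1)
    return [len(data) / 256.0, nonprint, 1.0]

def train(samples, epochs=50, lr=0.1):
    """Plain perceptron training (a toy stand-in for a deep network)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for data, label in samples:
            x = features(data)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(len(w)):
                w[i] += lr * (label - pred) * x[i]
    return w

def predict(w, data):
    """1 = flag input as potentially malicious, 0 = pass it through."""
    return 1 if sum(wi * xi for wi, xi in zip(w, features(data))) > 0 else 0

# Synthetic training data: well-formed text inputs vs. long binary blobs.
samples = [
    (b"<xml>ok</xml>", 0),
    (b"user=alice&id=42", 0),
    (bytes([0]) * 200, 1),
    (bytes([7, 27, 130, 200]) * 64, 1),
]
weights = train(samples)
```

In the ahead-of-threat setting, such a model would be trained on inputs generated by fuzzing an old library version and then evaluated against inputs triggering vulnerabilities discovered later.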
The ability to analyze network threats is very important in security research. Traditional approaches involving sandboxing technology are limited to simulating a single host, and thus miss local network attacks. We address this issue by designing a threat analysis framework that uses software-defined networking to simulate arbitrary networks. The presented system offers flexibility, allowing a security researcher to define a virtual network that can capture malicious actions and be restored to its initial state afterwards. We describe both the framework design and common usage scenarios. By providing this framework, we aim to ease the analysis effort in combating cyberthreats.
Small and medium enterprises (SMEs) play a decisive role in the EU economy; however, they are attractive targets for cyber-attacks, since they have specific characteristics, less security, and fewer resources for cybersecurity measures than large companies.
The article describes the SMESEC project. SMESEC develops a tailor-made cybersecurity framework for SMEs that considers both technical solutions and human-organisational aspects. Based on the requirements and feedback of the SMESEC use-case partners, it provides a state-of-the-art cybersecurity framework, cost-effective solutions, and cybersecurity awareness and training courses. In the development phase, we have given great weight to usability and automation, cyber situational awareness and control for end users, human factors in the design process, and current best practices and standards relevant to SMEs. The framework accounts for the use-case partners’ cybersecurity requirements through an innovative process that integrates various solutions in an orchestrated way. Future innovative approaches to SMESEC’s tools are prioritized based on increasing the simplicity of security tools, raising the protection level, improving cost-effectiveness, supporting training and awareness, and strengthening interconnection.
Given the growing number of SMEs willing to tackle their cybersecurity issues, SMESEC intends to be a holistic security framework. Its principal objectives are: developing an automated cybersecurity assessment engine, offering relevant feedback to SMEs regarding their cybersecurity behaviour and vulnerabilities, and aligning SMESEC innovations with international efforts, thereby providing inexpensive and effective security recommendations.
As cities gradually introduce intelligence into their core services and infrastructure, thus becoming “smart cities”, they are deploying new information technology devices in the urban grid that are interconnected through a broad network. The main focus of widely implemented smart-city services has been the operation of sensors and smart devices across city areas that require low energy consumption and high connectivity. However, as 5G technologies, which solve that problem, are gradually being adopted in the smart city infrastructure, the fundamental issue of security becomes dominant.
While the latest network topologies and standards include security functions, thus giving an illusion of security, there is little focus on the fact that many smart city end nodes cannot realize all security specifications without additional help.
In this paper, we briefly discuss smart city security issues and focus on the problems and security requirements that need to be addressed in smart city end nodes: the sensors and actuators deployed within the city’s grid. We discuss attacks that cannot be thwarted by traditional cybersecurity solutions and suggest hardware-based countermeasures that achieve a high level of trust. We also highlight the danger of microarchitectural and side-channel attacks on these devices and discuss protection approaches.
The problem of fast item retrieval from a fixed collection is encountered in most computer science areas, from operating system components to databases and user interfaces. We present an approach based on hash tables that focuses on minimizing both the number of comparisons performed during the search and the total collection size. The standard open-addressing double-hashing approach is improved with a non-linear transformation that can be parametrized to ensure a uniform distribution of the data in the hash table. The optimal parameter is determined using a genetic algorithm. The results show that near-perfect hashing is faster than binary search, yet uses less memory than perfect hashing, making it a good choice for memory-constrained applications where search time is also critical.
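The scheme can be sketched as double hashing whose probe index passes through a parameterized non-linear transform. In the illustrative Python sketch below, a quadratic term plays the role of the non-linear transformation and an exhaustive parameter sweep stands in for the genetic-algorithm search; the function names and the specific transform are assumptions, not the paper's exact construction.

```python
def probe_sequence(key, a, size, max_probes=8):
    """Open-addressing probe slots: double hashing plus a quadratic
    term in the probe index, parameterized by `a`."""
    h1 = hash(key) % size
    h2 = 1 + hash(key) % (size - 1)  # classic double-hashing step
    for i in range(max_probes):
        yield (h1 + i * h2 + a * i * i) % size

def build_table(keys, size, a):
    """Insert all keys, or return None if `a` causes too many collisions."""
    table = [None] * size
    for key in keys:
        for slot in probe_sequence(key, a, size):
            if table[slot] is None or table[slot] == key:
                table[slot] = key
                break
        else:
            return None
    return table

def find_parameter(keys, size):
    # exhaustive sweep standing in for the genetic-algorithm search
    for a in range(size):
        if build_table(keys, size, a) is not None:
            return a
    return None

def lookup(table, key, a):
    for slot in probe_sequence(key, a, len(table)):
        if table[slot] == key:
            return True
        if table[slot] is None:
            return False
    return False
```

A real fitness function would also score the uniformity of the resulting distribution, not merely whether insertion succeeds.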
Small and medium-sized enterprises (SME) have become the weak spot of our economy for cyber attacks. These companies are large in number and often neither have the controls in place to prevent successful attacks nor are prepared to systematically manage their cybersecurity capabilities. One reason why many SME do not adopt cybersecurity is that developers of cybersecurity solutions have little understanding of the SME context and the requirements for successful use of these solutions.
We elicit requirements by studying how cybersecurity experts provide advice to SME. The experts’ recommendations offer insights into what the important capabilities of a solution are and how these capabilities ought to be used for mitigating cybersecurity threats. The adoption of a recommendation hints at a correct match of the solution, hence successful consideration of requirements. Abandoned recommendations point to a misalignment that can be used as a source to inquire into missed requirements. Recurrence of adoption or abandonment decisions corroborates the presence of requirements. This poster describes the challenges of SME regarding cybersecurity and introduces our proposed approach to elicit requirements for cybersecurity solutions. It describes CYSEC, our tool used to capture cybersecurity advice and to help scale cybersecurity requirements elicitation to a large number of participating SME. We conclude by outlining the planned research to develop and validate CYSEC.
In this paper, we present the results of a large-scale analysis of open HTTP proxies, focusing on determining the extent to which user traffic is manipulated while being relayed. We have designed a methodology for detecting proxies that, instead of passively relaying traffic, actively modify the relayed content. Beyond simple detection, our framework is capable of macroscopically attributing certain traffic modifications at the network level to well-defined malicious actions, such as ad injection, user fingerprinting, and redirection to malware landing pages.
Our study reveals the true incentives of many of the publicly available web proxies. Our findings raise several concerns, as we uncover multiple cases where users can be severely affected by connecting to an open proxy. As a step towards protecting users against unwanted content modification, we built a service that leverages our methodology to automatically collect and probe public proxies, and generates a daily list of safe proxies that do not perform any content modification.
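The detection step boils down to comparing the response received through a proxy against a baseline copy of the same page fetched directly, and then attributing any injected content to a modification category. The sketch below illustrates this on static strings; the helper names and the marker lists used for attribution are illustrative assumptions, not our actual classification rules.

```python
import difflib

def injected_content(baseline, proxied):
    """Return lines present in the proxied response but absent from the
    baseline page, i.e. content added in transit."""
    diff = difflib.ndiff(baseline.splitlines(), proxied.splitlines())
    return [line[2:] for line in diff if line.startswith("+ ")]

def classify(injected):
    """Crude attribution of injected lines to modification categories
    (hypothetical markers for illustration only)."""
    markers = {
        "ad injection": ("adserver", "doubleclick"),
        "fingerprinting": ("canvas", "navigator.plugins"),
        "redirection": ("location.replace", 'meta http-equiv="refresh"'),
    }
    found = set()
    for line in injected:
        low = line.lower()
        for label, needles in markers.items():
            if any(n in low for n in needles):
                found.add(label)
    return sorted(found)
```

In practice the baseline must be a page fully under the prober's control, so that any difference can be safely attributed to the proxy rather than to dynamic page content.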