IEEE 2021: A Flexible Access Control with User Revocation in Fog-Enabled Cloud Computing Abstract - The major challenge in the fog-enabled cloud computing paradigm is to ensure the security of data accessed through cloud and fog nodes. To address this challenge, a Flexible Access Control using Elliptic Curve Cryptography (FAC-ECC) protocol has been developed in which user data are encrypted by multiple asymmetric keys. Such keys are handled by both users and fog nodes. Data access is also controlled by encrypting the data through the user. However, the main problem is to guarantee the privacy and security of resources after User Revocation (UR) is processed by data owners. The issue of UR needs to be considered to satisfy the dynamic change of user access in applications such as healthcare systems, e-commerce, etc. Therefore, in this article, a FAC-UR-ECC protocol is proposed to control data access and realize UR in fog-enabled cloud systems. In this protocol, a revocable key-aggregate-based cryptosystem is applied in the fog-cloud paradigm. It is an extension of the key-aggregate cryptosystem such that a user is revoked if his/her credential expires. First, the subset-cover model is combined with the FAC-ECC protocol to design an efficient revocable key-aggregate encryption scheme based on multilinear maps, which realizes the user's access control and revocation. It simplifies the user's key management efficiently and delegates decryption permission to various clients. It also accomplishes revocation of user access privileges and flexible access control efficiently. Under this protocol, both the user's secret key and the ciphertext are kept at a fixed size. The security of data access is greatly enhanced by having data owners update the ciphertext. Finally, the experimental results demonstrate the efficiency of FAC-UR-ECC compared to the FAC-ECC protocol.
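To make the subset-cover step concrete, the sketch below implements the classic complete-subtree cover: users are leaves of a binary tree, and the cover is the set of maximal subtrees containing no revoked user. This is a minimal illustration under that assumption, not the FAC-UR-ECC construction itself; all names are hypothetical.

```python
# Minimal illustration of the complete-subtree method used in
# subset-cover revocation schemes (hypothetical sketch, not the
# FAC-UR-ECC construction itself).

def subtree_cover(num_leaves, revoked):
    """Return roots (heap-layout node ids) of the maximal subtrees whose
    leaves are all non-revoked. num_leaves must be a power of two; the
    root is node 1 and the children of node n are 2n and 2n+1."""
    revoked_nodes = set()
    for leaf in revoked:
        node = num_leaves + leaf
        while node >= 1:              # mark the whole path to the root
            revoked_nodes.add(node)
            node //= 2
    cover = []

    def walk(node):
        if node not in revoked_nodes:  # clean subtree: one cover element
            cover.append(node)
            return
        if node < num_leaves:          # tainted internal node: recurse
            walk(2 * node)
            walk(2 * node + 1)
        # a revoked leaf is simply skipped

    walk(1)
    return cover

# 8 users, users 2 and 5 revoked: ciphertexts re-encrypted under the
# cover {4, 11, 12, 7} are decryptable only by non-revoked users.
print(subtree_cover(8, {2, 5}))
```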
IEEE 2021: Identity-Based Privacy Preserving Remote Data Integrity Checking for Cloud Storage Abstract: Although cloud storage services enable people to easily maintain and manage large amounts of data at lower cost, they cannot ensure the integrity of people's data. In order to audit the correctness of the data without downloading them, many remote data integrity checking (RDIC) schemes have been presented. Most existing schemes ignore the important issue of data privacy preservation and suffer from the complicated certificate management derived from public key infrastructure. To overcome these shortcomings, this article proposes a new identity-based RDIC scheme that makes use of a homomorphic verifiable tag to decrease the system complexity. The original data in the proof are masked by random integer addition, which prevents the verifier from obtaining any knowledge about the data during the integrity checking process. Our scheme is proved secure under the assumption of the computational Diffie-Hellman problem. Experimental results show that our scheme is efficient and feasible for real-life applications.
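As a hedged illustration of the homomorphic-verifiable-tag and random-masking ideas above, the toy sketch below uses tags of the form tag(m) = g^m mod p, so the product of tags equals the tag of the sum. Real RDIC schemes additionally bind tags to keys and block indices; the parameters here are illustrative.

```python
# Toy homomorphic verifiable tag: tag(m) = g^m mod p, so
# tag(m1) * tag(m2) = tag(m1 + m2). Only the aggregation and masking
# steps are shown; this is not the paper's full scheme.
import secrets

p = 2**127 - 1            # a Mersenne prime, used as illustrative modulus
g = 5

def tag(m):
    return pow(g, m, p)

blocks = [42, 99, 7]                   # "file blocks" as integers
tags = [tag(m) for m in blocks]

# The server aggregates data and tags in response to a challenge.
mu = sum(blocks)                       # aggregated blocks
sigma = 1
for t in tags:
    sigma = (sigma * t) % p            # aggregated tag
assert sigma == tag(mu)                # aggregate proof verifies

# Privacy masking as in the abstract: a random integer r hides the raw
# aggregate mu from the verifier, yet the check still passes.
r = secrets.randbelow(2**64)
mu_masked = mu + r
assert (sigma * tag(r)) % p == tag(mu_masked)
print("masked aggregate proof verified")
```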
IEEE 2021: Privacy-Preserving Data Encryption Strategy for Big Data in Mobile Cloud Computing Abstract: Privacy has become a considerable issue as applications of big data grow dramatically in cloud computing. The implementation of these emerging technologies has improved or changed service models and improved application performance from various perspectives. However, the remarkably growing volume of data has also resulted in many practical challenges. The execution time of data encryption is one of the serious issues during data processing and transmission. Many current applications abandon data encryption in order to reach an acceptable performance level, at the expense of privacy. In this paper, we concentrate on privacy and propose a novel data encryption approach called the Dynamic Data Encryption Strategy (D2ES). Our proposed approach aims to selectively encrypt data and use privacy classification methods under timing constraints. This approach is designed to maximize the privacy protection scope by using a selective encryption strategy within the required execution time. The performance of D2ES has been evaluated in our experiments, which provide evidence of the privacy enhancement.
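The selective-encryption idea lends itself to a small sketch: treat each data item as having a privacy weight and an encryption cost, then maximize protected weight within the time budget. This is a generic 0/1-knapsack stand-in, not D2ES's actual classification or strategy; all names and numbers are hypothetical.

```python
# Hedged sketch of selective encryption under a timing constraint, in the
# spirit of D2ES. Each packet has a privacy weight and an estimated
# encryption time; we choose the subset that maximizes protected privacy
# within the budget (classic 0/1 knapsack over reachable time states).

def select_for_encryption(packets, time_budget):
    """packets: list of (name, privacy_weight, encrypt_ms)."""
    best = {0: (0, [])}                    # spent ms -> (weight, chosen)
    for name, weight, cost in packets:
        for spent, (w, chosen) in list(best.items()):
            t = spent + cost
            if t <= time_budget and (t not in best or best[t][0] < w + weight):
                best[t] = (w + weight, chosen + [name])
    return max(best.values())              # highest protected weight

packets = [("health_record", 10, 30), ("location", 6, 10),
           ("telemetry", 2, 25), ("user_id", 8, 15)]
weight, chosen = select_for_encryption(packets, time_budget=50)
print(chosen, "protected privacy weight:", weight)
```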
IEEE 2021:CLASS: Cloud Log Assuring Soundness and Secrecy Scheme for Cloud Forensics Abstract: User activity logs can be a valuable source of information in cloud forensic investigations; hence, ensuring the reliability and integrity of such logs is crucial. Most existing solutions for secure logging are designed for conventional systems rather than the complexity of a cloud environment. In this paper, we propose the Cloud Log Assuring Soundness and Secrecy (CLASS) process as an alternative scheme for the securing of logs in a cloud environment. In CLASS, logs are encrypted using the individual user’s public key so that only the user is able to decrypt the content. In order to prevent unauthorized modification of the log, we generate proof of past log (PPL) using Rabin’s fingerprint and Bloom filter. Such an approach reduces verification time significantly. Findings from our experiments deploying CLASS in OpenStack demonstrate the utility of CLASS in a real-world context.
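A minimal sketch of the proof-of-past-log (PPL) idea follows: fingerprint each log entry and insert it into a Bloom filter so later verification is a fast membership test. A simple polynomial hash stands in for Rabin's fingerprint here, and all parameters are illustrative.

```python
# Hedged sketch of the PPL building blocks: a Bloom filter over
# per-entry fingerprints makes tamper checks a constant-time lookup.

class BloomFilter:
    def __init__(self, size=1 << 16, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, data: bytes):
        for seed in range(self.hashes):
            fp = seed + 1
            for b in data:                 # polynomial fingerprint
                fp = (fp * 257 + b) % 1_000_000_007
            yield fp % self.size

    def add(self, data: bytes):
        for pos in self._positions(data):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, data: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(data))

ppl = BloomFilter()
ppl.add(b"2021-03-01 user=alice action=login")
print(ppl.probably_contains(b"2021-03-01 user=alice action=login"))  # True
print(ppl.probably_contains(b"tampered log entry"))   # almost surely False
```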
IEEE 2021: Crypt-DAC: Cryptographically Enforced Dynamic Access Control in the Cloud Abstract: Enabling cryptographically enforced access controls for data hosted in an untrusted cloud is attractive for many users and organizations. However, designing an efficient cryptographically enforced dynamic access control system in the cloud is still challenging. In this paper, we propose Crypt-DAC, a system that provides practical cryptographic enforcement of dynamic access control. Crypt-DAC revokes access permissions by delegating the cloud to update encrypted data. In Crypt-DAC, a file is encrypted by a symmetric key list which records a file key and a sequence of revocation keys. In each revocation, a dedicated administrator uploads a new revocation key to the cloud and requests it to encrypt the file with a new layer of encryption and update the encrypted key list accordingly. Crypt-DAC proposes three key techniques to constrain the size of the key list and the encryption layers. As a result, Crypt-DAC enforces dynamic access control that provides efficiency, as it does not require expensive decryption/re-encryption and uploading/re-uploading of large data at the administrator side, and security, as it immediately revokes access permissions. We use a formalization framework and a system implementation to demonstrate the security and efficiency of our construction.
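The layered-revocation mechanic can be sketched in a few lines. The code below uses Fernet from the `cryptography` package purely as a stand-in symmetric cipher (an assumption; the paper defines its own construction): each revocation wraps the ciphertext in one more layer under a fresh key, and authorized readers holding the key list peel the layers in reverse.

```python
# Hedged sketch of Crypt-DAC's layered revocation idea; Fernet is a
# stand-in cipher, not the paper's construction.
from cryptography.fernet import Fernet

file_key = Fernet.generate_key()
key_list = [file_key]                       # file key + revocation keys
ciphertext = Fernet(file_key).encrypt(b"confidential report")

def revoke():
    """On each revocation the cloud wraps the data in one more layer
    under a fresh key; revoked users' old keys no longer suffice."""
    global ciphertext
    new_key = Fernet.generate_key()
    key_list.append(new_key)
    ciphertext = Fernet(new_key).encrypt(ciphertext)

def decrypt():
    """Authorized readers hold the key list and peel layers in reverse."""
    data = ciphertext
    for key in reversed(key_list):
        data = Fernet(key).decrypt(data)
    return data

revoke(); revoke()
assert decrypt() == b"confidential report"
print("encryption layers:", len(key_list))
```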
Abstract: Cloud computing has become prevalent due to its massive storage and vast computing capabilities. Ensuring secure data sharing is critical to cloud applications. Recently, a number of identity-based broadcast proxy re-encryption (IB-BPRE) schemes have been proposed to resolve the problem. However, IB-BPRE requires a cloud user (Alice) who wants to share data with a group of other users (e.g., colleagues) to participate in the group shared key renewal process, because Alice's private key is a prerequisite for shared key generation. This, however, does not leverage the benefits of cloud computing and causes inconvenience for cloud users. Therefore, a novel security notion named revocable identity-based broadcast proxy re-encryption (RIB-BPRE) is presented to address the issue of key revocation in this work. In a RIB-BPRE scheme, a proxy can revoke a set of delegates, designated by the delegator, from the re-encryption key. The performance evaluation reveals that the proposed scheme is efficient and practical.
IEEE 2021: Towards Deadline Guaranteed Cloud Storage Services Abstract: More and more organizations move their data and workload to commercial cloud storage systems. However, the multiplexing and sharing of the resources in a cloud storage system present unpredictable data access latency to tenants, which may make online data-intensive applications unable to satisfy their deadline requirements. Thus, it is important for cloud storage systems to provide deadline-guaranteed services. In this paper, to meet a current form of service level objective (SLO) that constrains the percentage of each tenant's data access requests failing to meet its required deadline below a given threshold, we build a mathematical model to derive the upper bound of the acceptable request arrival rate on each server. We then propose a Deadline Guaranteed storage service (called DGCloud) that incorporates three basic algorithms. Its deadline-aware load balancing scheme redirects requests and creates replicas to release the excess load of each server beyond the derived upper bound. Its workload consolidation algorithm tries to maximally reduce the number of active servers while still satisfying the SLO, in order to maximize resource utilization. Its data placement optimization algorithm re-schedules the data placement to minimize the transmission cost of data replication. We further propose three enhancement methods to improve the performance of DGCloud. A dynamic load balancing method allows an overloaded server to quickly offload its excess workload. A data request queue improvement method sets different priorities for the data responses in a server's queue so that more requests can satisfy the SLO requirement. A wakeup server selection method selects a sleeping server that stores more popular data to wake up, allowing it to handle more data requests. Our trace-driven experiments in simulation and on Amazon EC2 show the superior performance of DGCloud compared with previous methods in terms of deadline guarantees and system resource utilization, and the effectiveness of its individual algorithms.
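The per-server arrival-rate bound can be illustrated with a textbook M/M/1 queue, where the response time is exponential with rate (mu - lambda), so P(T > d) = exp(-(mu - lambda) * d) <= eps gives lambda <= mu + ln(eps) / d. This is a stand-in for intuition; DGCloud derives its own model.

```python
# Illustrative upper bound on the acceptable arrival rate per server
# under an SLO "at most eps of requests miss deadline d", assuming an
# M/M/1 queue (an assumption; the paper's model differs).
import math

def max_arrival_rate(mu, deadline, eps):
    """For M/M/1, response time T ~ Exp(mu - lambda), so
    P(T > d) = exp(-(mu - lambda) * d) <= eps
    implies lambda <= mu + ln(eps) / d."""
    lam = mu + math.log(eps) / deadline
    return max(0.0, lam)          # negative means the SLO is infeasible

# A server handling 100 req/s, a 100 ms deadline, and at most 5% misses.
print(round(max_arrival_rate(mu=100.0, deadline=0.1, eps=0.05), 2))  # ~70.04
```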
Abstract: With the rapid
development of cloud computing services, more and more individuals and
enterprises prefer to outsource their data or computing to clouds. In order to
preserve data privacy, the data should be encrypted before outsourcing and it
is a challenge to perform searches over encrypted data. In this paper, we
propose a privacy-preserving multi-keyword ranked search scheme over encrypted
data in hybrid clouds, which is denoted as MRSE-HC. The keyword dictionary of
documents is clustered into balanced partitions by a bisecting k-means
clustering based keyword partition algorithm. According to the partitions, the
keyword partition based bit vectors are adopted for documents and queries which
are utilized as the index of searches. The private cloud filters out the
candidate documents by the keyword partition based bit vectors, and then the
public cloud uses the trapdoor to determine the result in the candidates.
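A hedged sketch of the partition bit-vector filtering step: each document and query is summarized with one bit per keyword partition, and the private cloud forwards only documents whose vector covers the query's bits. The partitions and keywords below are hypothetical stand-ins for the clustered dictionary.

```python
# Hedged sketch of keyword-partition bit vectors used for candidate
# filtering in MRSE-HC-style schemes; the partitioning is hypothetical.

partitions = {"medical": 0, "finance": 1, "travel": 2, "sports": 3}
keyword_to_partition = {"diagnosis": "medical", "invoice": "finance",
                        "flight": "travel", "score": "sports"}
kp = {k: partitions[v] for k, v in keyword_to_partition.items()}

def bit_vector(keywords, keyword_partition):
    vec = 0
    for kw in keywords:
        vec |= 1 << keyword_partition[kw]   # one bit per partition
    return vec

docs = {"d1": ["diagnosis", "invoice"], "d2": ["flight"],
        "d3": ["score", "diagnosis"]}
index = {doc: bit_vector(kws, kp) for doc, kws in docs.items()}

query = bit_vector(["diagnosis"], kp)
# A document is a candidate if it covers every queried partition bit.
candidates = [d for d, v in index.items() if v & query == query]
print(candidates)  # ['d1', 'd3'] proceed to the public cloud's trapdoor ranking
```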
Abstract: Mobile health (mHealth)
has emerged as a new patient-centric model which allows real-time collection of
patient data via wearable sensors, aggregation and encryption of these data at mobile
devices, and then uploading the encrypted data to the cloud for storage and
access by healthcare staff and researchers. However, efficient and scalable
sharing of encrypted data has been a very challenging problem. In this paper,
we propose a Lightweight Sharable and Traceable (LiST) secure mobile health
system in which patient data are encrypted end-to-end from a patient’s mobile
device to data users.
IEEE 2020: Lightweight and Privacy-Preserving
ID-as-a-Service provisioning in Vehicular Cloud Computing
Abstract: Vehicular cloud
computing (VCC) is composed of multiple distributed vehicular clouds (VCs),
which are formed on-the-fly by dynamically integrating underutilized vehicular
resources including computing power, storage, and so on. Existing proposals for
identity-as-a-service (IDaaS) are not suitable for use in VCC due to limited
computing resources and storage capacity of onboard vehicle devices. In this
paper, we first propose an improved ciphertext-policy attribute-based encryption
(CP-ABE) scheme. Utilizing the improved CP-ABE scheme and the permissioned
blockchain technology, we propose a lightweight and privacy-preserving IDaaS
architecture for VCC named IDaaSoVCC.
Abstract: Frequent item set
mining, which is the essential operation in association rule mining, is one of
the most widely used data mining techniques on massive datasets nowadays. With
the dramatic increase on the scale of datasets collected and stored with cloud
services in recent years, it is promising to carry this computation-intensive
mining process in the cloud. A number of works have also transformed the
approximate mining computation into exact computation, where such methods not
only improve accuracy but also aim to enhance efficiency. However, while mining
data stored on public clouds, it inevitably introduces privacy concerns on
sensitive datasets.
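For concreteness, the sketch below runs one Apriori-style pass (frequent items, then candidate pairs) over toy transactions; privacy-preserving variants perform this same counting over encrypted or perturbed data.

```python
# Minimal frequent-itemset counting (frequent items, then an Apriori
# pass over candidate pairs) to make the mining operation concrete.
from itertools import combinations
from collections import Counter

transactions = [{"milk", "bread"}, {"milk", "eggs"},
                {"milk", "bread", "eggs"}, {"bread"}]
min_support = 2

# Frequent single items.
item_counts = Counter(i for t in transactions for i in t)
frequent_items = {i for i, c in item_counts.items() if c >= min_support}

# Candidate pairs built only from frequent items (Apriori pruning).
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t & frequent_items), 2):
        pair_counts[pair] += 1
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)   # {('bread', 'milk'): 2, ('eggs', 'milk'): 2}
```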
Abstract: High availability is one of the core properties of Infrastructure as a Service (IaaS) and ensures that users have anytime access to on-demand cloud services. However, significant variations in workload and the presence of super-tasks mean that heterogeneous workloads can severely impact the availability of IaaS clouds. Although previous work has investigated global queues, VM deployment, and failure of PMs, two aspects are yet to be fully explored: one is the impact of task size, and the other is the differing features across PMs, such as variable execution rate and capacity. To address these challenges, we propose an attribute-based availability model of large-scale IaaS developed in the formal modeling language CARMA. The size of tasks in our model can be a fixed integer value or follow the normal, uniform or log-normal distribution.
IEEE 2018: An Efficient and Privacy-Preserving Biometric Identification Scheme in Cloud Computing
ABSTRACT : Biometric identification has
become increasingly popular in recent years. With the development of cloud
computing, database owners are motivated to outsource the large size of
biometric data and identification tasks to the cloud to get rid of the
expensive storage and computation costs, which, however, brings potential
threats to users' privacy. In this paper, we propose an efficient and
privacy-preserving biometric identification outsourcing scheme. Specifically,
the biometric data are encrypted before being outsourced to the cloud. To
execute a biometric identification, the database owner encrypts
the query data and submits it to the cloud. The cloud performs identification operations
over the encrypted database and returns the result to the database owner. A
thorough security analysis indicates that the proposed scheme is secure
even if attackers can forge identification requests and collude with the
cloud. Compared with previous protocols, experimental results show that the
proposed scheme achieves a better performance in both preparation and
identification procedures.
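One common building block for such schemes can be sketched directly: a secret random rotation preserves Euclidean distances, so the cloud can rank matches on transformed templates without seeing raw data. This is a generic technique shown under that assumption, not necessarily the paper's construction, and rotation alone is known to leak some information.

```python
# Hedged sketch: distance-preserving transformation of biometric
# templates via a secret random rotation (illustrative only).
import numpy as np

rng = np.random.default_rng(seed=7)

# Secret orthogonal matrix kept by the database owner; QR of a random
# Gaussian matrix yields a random rotation.
d = 4
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

database = rng.normal(size=(5, d))        # raw biometric templates
enc_db = database @ Q                     # "encrypted" templates

query = database[3] + rng.normal(scale=0.01, size=d)  # noisy probe
enc_query = query @ Q

# The cloud ranks candidates on transformed data only; distances match
# the plaintext distances because Q is orthogonal.
dists = np.linalg.norm(enc_db - enc_query, axis=1)
print("best match:", int(np.argmin(dists)))   # 3
```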
IEEE 2018: Secure Attribute-Based
Signature Scheme With Multiple Authorities for Blockchain in Electronic
Health Records Systems
ABSTRACT : Electronic Health Records (EHRs) are
entirely controlled by hospitals instead of patients, which complicates seeking medical advice from
different hospitals. Patients face a critical need to focus on the details of their own healthcare and
restore management of their own medical data. The rapid development of blockchain technology promotes population
healthcare, including medical records as well as patient-related data. This technology provides patients
with comprehensive, immutable records, and access to EHRs free from service providers and treatment
websites. In this paper, to guarantee the validity of EHRs encapsulated in blockchain, we present an
attribute-based signature scheme with multiple authorities, in which a patient endorses a message according to the
attribute while disclosing no information other than the evidence that he has attested to it. Furthermore, there
are multiple authorities without a trusted single or central one to generate and distribute public/private
keys of the patient, which avoids the escrow problem and conforms to the mode of distributed data storage in
the blockchain. By sharing the secret pseudorandom function seeds among authorities, this protocol resists
collusion attacks by up to N-1 corrupted authorities out of N. Under the assumption of the computational bilinear
Diffie-Hellman problem, we also formally demonstrate that, in terms of the unforgeability and perfect privacy of the
attribute-signer, this attribute-based signature scheme is secure in the random oracle model. The comparison
shows the efficiency and properties of the proposed method against previously proposed methods.
IEEE 2018: DROPS:
Division and Replication of Data in Cloud for Optimal Performance and Security
ABSTRACT : Outsourcing
data to a third-party administrative control, as is done in cloud computing,
gives rise to security concerns. The data compromise may occur due to
attacks by other users and nodes within the cloud. Therefore, high security
measures are required to protect data within the cloud.
However, the employed security strategy must also take into account the
optimization of the data retrieval time. In this paper,
we propose Division and Replication of Data in the Cloud for Optimal
Performance and Security (DROPS) that collectively
approaches the security and performance issues. In the DROPS methodology, we
divide a file into fragments, and replicate the
fragmented data over the cloud nodes. Each of the nodes stores only a single
fragment of a particular data file that ensures
that even in case of a successful attack, no meaningful information is revealed
to the attacker. Moreover, the nodes storing the
fragments are separated by a certain distance by means of graph T-coloring to
prevent an attacker from guessing the locations of
the fragments. Furthermore, the DROPS methodology does not rely on the
traditional cryptographic techniques for the data
security; thereby relieving the system of computationally expensive
methodologies. We show that the probability to locate and
compromise all of the nodes storing the fragments of a single file is extremely
low. We also compare the performance of the DROPS
methodology with ten other schemes. A higher level of security with only a slight performance overhead was observed.
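The T-coloring-style separation can be approximated with a simple placement rule: never store two fragments of the same file on nodes closer than a threshold graph distance. The sketch below is a simplified stand-in for the DROPS placement, with a hypothetical ring topology.

```python
# Hedged sketch of DROPS-style fragment placement: each fragment goes on
# a node at graph distance >= h from every node already holding a
# fragment of the same file (a simplified stand-in for T-coloring).
from collections import deque

def bfs_distances(graph, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def place_fragments(graph, fragments, h):
    used, placement = [], {}
    for frag in fragments:
        for node in graph:
            if node not in used and all(
                    bfs_distances(graph, node)[u] >= h for u in used):
                placement[frag] = node
                used.append(node)
                break
        else:
            raise RuntimeError("no node satisfies the separation constraint")
    return placement

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # six storage nodes
print(place_fragments(ring, ["f1", "f2", "f3"], h=2))     # e.g. nodes 0, 2, 4
```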
ABSTRACT : Cloud computing is a very useful solution
to many individual users and organizations. It can
provide many services based on different needs and requirements.
However, there are many issues related to the user data that
need to be addressed when using cloud computing. Among the most
important issues are: data ownership, data privacy, and
storage. The users might be satisfied by the services provided by
the cloud computing service providers, since they need not worry about
the maintenance and storage of their data. On the other
hand, they might be worried about unauthorized access to their
private data. Some solutions to these issues were proposed in
the literature, but they mainly increase the cost and processing
time since they depend on encrypting the whole data. In this
paper, we are introducing a cloud computing framework that
classifies the data based on their importance. In other words, more
important data will be encrypted with more secure encryption
algorithm and larger key sizes, while less important data might
even not be encrypted. This approach is very helpful in reducing
the processing cost and complexity of data storage and
manipulation since we do not need to apply the same sophisticated
encryption techniques to the entire users' data. The results of
applying the proposed framework show improvement and efficiency
over other existing frameworks.
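A hedged sketch of the importance-based classification described above: low-importance data stays in plaintext while higher tiers are encrypted. The thresholds and the use of Fernet are hypothetical stand-ins for the framework's actual classifier and key-size choices.

```python
# Hedged sketch of importance-tiered protection; thresholds and cipher
# are illustrative assumptions, not the framework's actual algorithm.
from cryptography.fernet import Fernet

def protect(record: bytes, importance: float):
    """Encrypt only data that crosses an importance threshold; a full
    design would use stronger parameters for higher tiers."""
    if importance >= 0.7:
        key = Fernet.generate_key()            # high tier: always encrypt
        return ("encrypted-strong", key, Fernet(key).encrypt(record))
    if importance >= 0.3:
        key = Fernet.generate_key()            # medium tier: lighter handling
        return ("encrypted-light", key, Fernet(key).encrypt(record))
    return ("plaintext", None, record)         # low tier: stored as-is

print(protect(b"credit card: 4111...", 0.9)[0])   # encrypted-strong
print(protect(b"public newsletter", 0.1)[0])      # plaintext
```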
IEEE 2018: Privacy
Preserving Ranked Multi-Keyword Search for Multiple Data Owners in Cloud Computing
ABSTRACT : With the advent of cloud computing, it has become increasingly popular for data owners to outsource their data to public cloud servers while allowing data users to retrieve this data. For privacy concerns, secure searches over encrypted cloud data have motivated several research works under the single-owner model. However, most cloud servers in practice do not just serve one owner; instead, they support multiple owners to share the benefits brought by cloud computing. In this paper, we propose schemes to deal with Privacy-preserving Ranked Multi-keyword Search in a Multi-owner model (PRMSM). To enable cloud servers to perform secure search without knowing the actual data of both keywords and trapdoors, we systematically construct a novel secure search protocol. To rank the search results and preserve the privacy of relevance scores between keywords and files, we propose a novel Additive Order and Privacy Preserving Function family. To prevent attackers from eavesdropping secret keys and pretending to be legal data users submitting searches, we propose a novel dynamic secret key generation protocol and a new data user authentication protocol. Furthermore, PRMSM supports efficient data user revocation. Extensive experiments on real-world datasets confirm the efficacy and efficiency of PRMSM.
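The order-preserving ranking idea can be sketched with a simple additive encoding: scale each score by a secret factor and add bounded random noise, so distinct scores keep their order while equal scores encode differently. This only illustrates the order-preserving property; PRMSM's function family is more elaborate, and the constants here are hypothetical.

```python
# Hedged sketch of an order-preserving score encoding in the spirit of
# PRMSM's Additive Order and Privacy Preserving Function family.
import secrets

A = 1_000          # secret scale; noise stays below A so order survives
B = 42_917         # secret offset

def encode(score: int) -> int:
    # For distinct integers x < y: A*x + noise < A*(x+1) <= A*y + noise',
    # so the encoded values preserve the original order.
    return A * score + B + secrets.randbelow(A)

scores = [3, 17, 17, 8]
encoded = [encode(s) for s in scores]

# The cloud ranks encoded scores without learning raw values; equal
# scores (17, 17) encode differently and tie-break arbitrarily.
ranked = sorted(range(len(scores)), key=lambda i: encoded[i], reverse=True)
print(ranked)      # document indices from highest to lowest score
```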
ABSTRACT : Cloud computing is the latest technology in the field of distributed computing. It provides various online and on-demand services for data storage, network services, platform services, and so on. Many organizations are reluctant to use cloud services due to data security issues, as the data reside on the cloud service provider's servers. To address this issue, several approaches have been applied by researchers worldwide to strengthen the security of data stored on cloud computing. The Bi-directional DNA Encryption Algorithm (BDEA) is one such data security technique. However, the existing technique focuses only on the ASCII character set, ignoring the non-English users of cloud computing. Thus, this proposed work focuses on enhancing the BDEA for use with the Unicode character set.
IEEE 2018: Anonymous
Authentication for Secure Data Stored on Cloud with Decentralized Access
Control
ABSTRACT : A decentralized storage system for accessing data with anonymous authentication provides more secure user authentication and user revocation, and prevents replay attacks. Access control is processed on decentralized KDCs, which makes data encryption more secure. The generated decentralized KDCs are then grouped by a Key Generation Center (KGC). Our system provides authentication for the user, in which only system-authorized users are able to decrypt and view the stored information. User validation and access control schemes are introduced in a decentralized manner, which is useful for preventing replay attacks and supports modification of data stored in the cloud. The access control scheme is gaining attention because it is important that only approved users are able to access valid data. Our scheme supports creation, reading, and modification of data stored in the cloud while preventing replay attacks. We also address user revocation. The problems of validation, access control, and privacy protection should be solved simultaneously.
IEEE 2018: Enabling
Identity-Based Integrity Auditing and Data Sharing with Sensitive Information
Hiding for Secure Cloud Storage
ABSTRACT : With cloud storage services, users can remotely store their data to the cloud and realize the data sharing with others. Remote data integrity auditing is proposed to guarantee the integrity of the data stored in the cloud. In some common cloud storage systems such as the Electronic Health Records (EHRs) system, the cloud file might contain some sensitive information. The sensitive information should not be exposed to others when the cloud file is shared. Encrypting the whole shared file can realize the sensitive information hiding, but will make this shared file unable to be used by others. How to realize data sharing with sensitive information hiding in remote data integrity auditing still has not been explored up to now. In order to address this problem, we propose a remote data integrity auditing scheme that realizes data sharing with sensitive information hiding in this paper. In this scheme, a sanitizer is used to sanitize the data blocks corresponding to the sensitive information of the file and transforms these data blocks’ signatures into valid ones for the sanitized file. These signatures are used to verify the integrity of the sanitized file in the phase of integrity auditing. As a result, our scheme makes the file stored in the cloud able to be shared and used by others on the condition that the sensitive information is hidden, while the remote data integrity auditing is still able to be efficiently executed. Meanwhile, the proposed scheme is based on identity-based cryptography, which simplifies the complicated certificate management. The security analysis and the performance evaluation show that the proposed scheme is secure and efficient.
ABSTRACT : For data analytics jobs running across geographically distributed datacenters, coflows have to go through the inter-datacenter network over relatively low-bandwidth and high-cost links. In this case, optimizing cost-performance tradeoffs for such coflows becomes crucial. Ideally, decreasing coflow completion time (CCT) can significantly improve the network performance; meanwhile, reducing the transmission cost introduced by these coflows is another fundamental goal for datacenter operators. Unfortunately, minimizing both CCT and the transmission cost are conflicting objectives which cannot be achieved concurrently. Prior methods have significant limitations when exploring such tradeoffs, because they either merely decrease the average CCT or reduce the transmission cost independently. In this paper, we focus on a cost-performance tradeoff problem for coflows running across the inter-datacenter network. Specifically, we formulate an optimization problem so as to minimize a combination of both the average CCT and the average transmission cost. This problem is inherently hard to solve due to the unknown information of future coflows. We therefore present Lever, an online coflow-aware optimization framework, to balance these two conflicting objectives. Without any prior knowledge of future coflows, Lever has been proved to have a non-trivial competitive ratio in solving this cost-performance tradeoff problem. Results from large-scale simulations demonstrate that Lever can significantly reduce the average transmission cost and, at the same time, speed up the completion of these coflows, compared to state-of-the-art solutions.
IEEE 2018: Stability of Evolving Fuzzy Systems based on Data
Clouds
ABSTRACT : Evolving fuzzy systems (EFSs) are now well developed and widely used thanks to their ability to self-adapt both their structures and parameters online. Since the concept was first introduced two decades ago, many different types of EFSs have been successfully implemented. However, there are only very few works considering the stability of EFSs, and these studies were limited to certain types of membership functions with specifically pre-defined parameters, which largely increases the complexity of the learning process. At the same time, stability analysis is of paramount importance for control applications and provides the theoretical guarantees for the convergence of the learning algorithms. In this paper, we introduce the stability proof of a class of EFSs based on data clouds, which are grounded on the AnYa type fuzzy systems and the recently introduced empirical data analysis (EDA) methodological framework. By employing data clouds, the class of EFSs of AnYa type considered in this work avoids the traditional way of defining membership functions for each input variable in an explicit manner, and its learning process is entirely data-driven. The stability of the considered EFS of AnYa type is proven through the Lyapunov theory, and the proof of stability shows that the average identification error converges to a small neighborhood of zero. Although the stability proof presented in this paper is specially elaborated for the considered EFS, it is also applicable to general EFSs. The proposed method is illustrated with the Box-Jenkins gas furnace problem, one nonlinear system identification problem, the Mackey-Glass time series prediction problem, eight real-world benchmark regression problems, as well as a high frequency trading prediction problem. Compared with other EFSs, the numerical examples show that the considered EFS in this paper provides guaranteed stability as well as a better approximation accuracy.
IEEE 2018: Anonymous and Traceable Group Data Sharing in Cloud Computing
IEEE 2018: Efficient and Expressive Keyword Search Over Encrypted Data in Cloud
ABSTRACT : Searchable encryption allows a cloud server to conduct keyword search over encrypted data on behalf of the data users without learning the underlying plaintexts. However, most existing searchable encryption schemes only support single or conjunctive keyword search, while a few other schemes that are able to perform expressive keyword search are computationally inefficient since they are built from bilinear pairings over composite-order groups. In this paper, we propose an expressive public-key searchable encryption scheme in the prime-order groups, which allows keyword search policies (i.e., predicates, access structures) to be expressed in conjunctive, disjunctive or any monotonic Boolean formulas and achieves significant performance improvement over existing schemes. We formally define its security, and prove that it is selectively secure in the standard model. Also, we implement the proposed scheme using a rapid prototyping tool called Charm [37], and conduct several experiments to evaluate its performance. The results demonstrate that our scheme is much more efficient than the ones built over composite-order groups.
IEEE 2017: Two-Factor Data Access Control With Efficient Revocation for Multi-Authority Cloud Storage Systems
ABSTRACT : Attribute-based encryption, especially for ciphertext-policy
attribute-based encryption, can fulfill the functionality of fine-grained
access control in cloud storage systems. Since users' attributes may be issued
by multiple attribute authorities, multi-authority ciphertext-policy
attribute-based encryption is an emerging cryptographic primitive for enforcing
attribute-based access control on outsourced data. However, most of the
existing multi-authority attribute-based systems are either insecure in
attribute-level revocation or lack of efficiency in communication overhead and
computation cost. In this paper, we propose an attribute-based access control
scheme with two-factor protection for multi-authority cloud storage systems. In
our proposed scheme, any user can recover the outsourced data if and only if
this user holds sufficient attribute secret keys with respect to the access
policy and authorization key in regard to the outsourced data. In addition, the
proposed scheme enjoys the properties of constant-size ciphertext and small
computation cost. Besides supporting the attribute-level revocation, our
proposed scheme allows data owner to carry out the user-level revocation. The
security analysis, performance comparisons, and experimental results indicate
that our proposed scheme is not only secure but also practical.
IEEE 2017: FastGeo:
Efficient Geometric Range Queries on Encrypted Spatial Data
IEEE 2017: Practical Privacy-Preserving Content-Based Retrieval in Cloud Image Repositories
ABSTRACT :Storage
requirements for visual data have been increasing in recent years, following
the emergence of many highly interactive multimedia services and applications
for mobile devices in both personal and corporate scenarios. This has been a
key driving factor for the adoption of cloud-based data outsourcing solutions.
However, outsourcing data storage to the Cloud also leads to new security
challenges that must be carefully addressed, especially regarding privacy. In
this paper we propose a secure framework for outsourced privacy-preserving
storage and retrieval in large shared image repositories. Our proposal is based
on IES-CBIR, a novel Image Encryption Scheme that exhibits Content-Based Image
Retrieval properties. The framework enables both encrypted storage and
searching using Content-Based Image Retrieval queries while preserving privacy
against honest-but-curious cloud administrators. We have built a prototype of
the proposed framework, formally analyzed and proven its security properties,
and experimentally evaluated its performance and retrieval precision.
IEEE 2017: Temporal Task Scheduling With Constrained Service Delay for Profit Maximization in Hybrid Clouds
ABSTRACT :As
cloud computing becomes increasingly popular, consumers’ tasks around the
world arrive in cloud data centers. A private cloud provider aims to achieve
profit maximization by intelligently scheduling tasks while guaranteeing the
service delay bound of delay-tolerant tasks. However, the aperiodicity of arrival
tasks brings a challenging problem of how to dynamically schedule all arrival
tasks given the fact that the capacity of a private cloud provider is limited.
Previous works usually provide an admission control to intelligently refuse
some of arrival tasks. Nevertheless, this will decrease the throughput of a
private cloud, and cause revenue loss. This paper studies the problem of how to
maximize the profit of a private cloud in hybrid clouds while guaranteeing the
service delay bound of delay-tolerant tasks. We propose a profit maximization
algorithm (PMA) to discover the temporal variation of prices in hybrid clouds.
The temporal task scheduling provided by PMA can dynamically schedule all arrival
tasks to execute in private and public clouds. The subproblem in each
iteration of PMA is solved by the proposed hybrid heuristic optimization
algorithm, simulated annealing particle swarm optimization (SAPSO). Besides,
SAPSO is compared with existing baseline algorithms. Extensive simulation
experiments demonstrate that the proposed method can greatly increase the throughput
and the profit of a private cloud while guaranteeing the service delay bound.
IEEE 2017: Optimizing Cloud-Service Performance: Efficient Resource Provisioning via Optimal Workload Allocation
ABSTRACT :Cloud computing is being widely
accepted and utilized in the business world. From the perspective of businesses
utilizing the cloud, it is critical to meet their customers’ requirements by
achieving service-level-objectives. Hence, the ability to accurately characterize
and optimize cloud-service performance is of great importance. In this paper a
stochastic multi-tenant framework is proposed to model the service of customer
requests in a cloud infrastructure composed of heterogeneous virtual machines.
Two cloud-service performance metrics are mathematically characterized, namely
the percentile and the mean of the stochastic response time of a customer
request, in closed form. Based upon the proposed multi-tenant framework, a
workload allocation algorithm, termed the max-min-cloud algorithm, is then devised
to optimize the performance of the cloud service. A rigorous optimality proof
of the max-min-cloud algorithm is also given. Furthermore, the
resource-provisioning problem in the cloud is also studied in light of the
max-min-cloud algorithm. In particular, an efficient resource-provisioning
strategy is proposed for serving dynamically arriving
customer requests. These findings can be used by businesses to build a better
understanding of how much virtual resource in the cloud they may need to meet
customers’ expectations subject to cost constraints.
IEEE 2017: Live Data Analytics With Collaborative Edge and Cloud Processing in Wireless IoT Networks
ABSTRACT : Recently, big data analytics has received important attention in a variety of application domains including business, finance, space science, healthcare, telecommunication and the Internet of Things (IoT). Among these areas, IoT is considered an important platform for bringing people, processes, data and things/objects together in order to enhance the quality of our everyday lives. However, the key challenges are how to effectively extract useful features from the massive amount of heterogeneous data generated by resource-constrained IoT devices in order to provide real-time information and feedback to the end-users, and how to utilize this data-aware intelligence to enhance the performance of wireless IoT networks. Although there are parallel advances in cloud computing and edge computing for addressing some issues in data analytics, they have their own benefits and limitations. The convergence of these two computing paradigms, i.e., the massive virtually shared pool of computing and storage resources from the cloud and real-time data processing by edge computing, could effectively enable live data analytics in wireless IoT networks. In this regard, we propose a novel framework for coordinated processing between edge and cloud computing/processing by integrating the advantages of both platforms. The proposed framework can exploit the network-wide knowledge and historical information available at the cloud center to guide edge computing units towards satisfying various performance requirements of heterogeneous wireless IoT networks. Starting with the main features, key enablers and the challenges of big data analytics, we present various synergies and distinctions between cloud and edge processing. More importantly, we identify and describe the potential key enablers for the proposed edge-cloud collaborative framework, the associated key challenges and some interesting future research directions.
IEEE 2017: Optimizing Green
Energy, Cost, and Availability in Distributed Data Centers
ABSTRACT : Integrating renewable energy and ensuring high availability are among
the major requirements for geo-distributed data centers. Availability is
ensured by provisioning spare capacity across the data centers to mask data
center failures (either partial or complete). We propose a mixed integer linear
programming formulation for capacity planning while minimizing the total cost
of ownership (TCO) for highly available, green, distributed data centers. We
minimize the cost due to power consumption and server deployment, while
targeting a minimum usage of green energy. Solving our model shows that
capacity provisioning considering green energy integration, not only lowers
carbon footprint but also reduces the TCO. Results show that up to 40% green
energy usage is feasible with marginal increase in the TCO compared to the
other cost-aware models.
IEEE 2017: Cost Minimization
Algorithms for Data Center Management
ABSTRACT : Due to the increasing usage of cloud computing applications, it is important to minimize the energy cost consumed by a data center and, simultaneously, to improve quality of service via data center management. One promising approach is to switch some servers in a data center to the idle mode for saving energy while keeping a suitable number of servers in the active mode for providing timely service. In this paper, we design both online and offline algorithms for this problem. For the offline algorithm, we formulate data center management as a cost minimization problem by considering energy cost, delay cost (to measure service quality), and switching cost (to change servers’ active/idle mode). Then, we analyze certain properties of an optimal solution which lead to a dynamic programming based algorithm. Moreover, by revising the solution procedure, we successfully eliminate the recursive procedure and achieve an optimal offline algorithm with a polynomial complexity.
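The offline formulation above admits a compact dynamic program: the state is the number of active servers, and each slot pays energy, delay, and switching costs. The sketch below uses illustrative cost shapes, not the paper's exact model.

```python
# Hedged sketch of the offline dynamic program: choose how many servers
# are active in each time slot to minimize energy + delay + switching
# costs. Cost shapes and weights are illustrative stand-ins.

def plan_active_servers(workload, max_servers,
                        energy_cost=1.0, switch_cost=3.0, delay_weight=8.0):
    INF = float("inf")
    dp = [0.0] + [INF] * max_servers    # dp[m]: best cost with m active now
    steps = []                          # per-slot backpointers
    for load in workload:
        nxt = [INF] * (max_servers + 1)
        back = [0] * (max_servers + 1)
        for m in range(max_servers + 1):
            # Crude delay proxy: load divided across m active servers.
            if m == 0:
                delay = 0.0 if load == 0 else INF
            else:
                delay = delay_weight * load / m
            for prev in range(max_servers + 1):
                cost = (dp[prev] + energy_cost * m + delay
                        + switch_cost * abs(m - prev))
                if cost < nxt[m]:
                    nxt[m], back[m] = cost, prev
        dp = nxt
        steps.append(back)
    # Backtrack from the cheapest final state to a per-slot schedule.
    m = min(range(max_servers + 1), key=dp.__getitem__)
    best_cost = dp[m]
    schedule = []
    for back in reversed(steps):
        schedule.append(m)
        m = back[m]
    schedule.reverse()
    return schedule, best_cost

schedule, cost = plan_active_servers([2, 5, 5, 1, 0], max_servers=6)
print(schedule, round(cost, 2))
```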
IEEE 2017: Vehicular Cloud Data
Collection for Intelligent Transportation Systems
ABSTRACT :The Internet of Things (IoT) envisions to connect billions of sensors
to the Internet, in order to provide new applications and services for smart
cities. IoT will allow the evolution of the Internet of Vehicles (IoV) from
existing Vehicular Ad hoc Networks (VANETs), in which the delivery of various
services will be offered to drivers by integrating vehicles, sensors, and
mobile devices into a global network. To serve VANET with computational
resources, Vehicular Cloud Computing (VCC) is recently envisioned with the
objective of providing traffic solutions to improve our daily driving. These
solutions involve applications and services for the benefit of Intelligent
Transportation Systems (ITS), which represent an important part of IoV. Data
collection is an important aspect in ITS, which can effectively serve online
travel systems with the aid of Vehicular Cloud (VC). In this paper, we involve
the new paradigm of VCC to propose a data collection model for the benefit of
ITS. We show via simulation results that the participation of a low percentage of
vehicles in a dynamic VC is sufficient to provide meaningful data collection.
IEEE 2017: RAAC: Robust and
Auditable Access Control with Multiple Attribute Authorities for Public Cloud
Storage
ABSTRACT :Data access control is a challenging issue in public cloud storage
systems. Ciphertext-Policy Attribute-Based En-cryption (CP-ABE) has been
adopted as a promising technique to provide flexible, fine-grained and secure
data access control for cloud storage with honest-but-curious cloud servers.
However, in the existing CP-ABE schemes, the single attribute authority must
execute the time-consuming user legitimacy verification and secret key
distribution, and hence it results in a single-point performance bottleneck
when a CP-ABE scheme is adopted in a large-scale cloud storage system. Users
may be stuck in the waiting queue for a long period to obtain their secret
keys, thereby resulting in low-efficiency of the system. Although
multi-authority access control schemes have been proposed, these schemes still
cannot overcome the drawbacks of single-point bottleneck and low efficiency,
due to the fact that each of the authorities still independently manages a
disjoint attribute set.
IEEE 2017: Identity-Based Remote
Data Integrity Checking With Perfect Data Privacy Preserving for Cloud Storage
ABSTRACT : Remote data integrity checking (RDIC)
enables a data storage server, says a cloud server, to prove to a verifier that
it is actually storing a data owner’s data honestly. To date, a number of RDIC
protocols have been proposed in the literature, but most of the constructions
suffer from the issue of a complex key management, that is, they rely on the
expensive public key infrastructure (PKI), which might hinder the deployment of
RDIC in practice. In this paper, we propose a new construction of
identity-based (ID-based) RDIC protocol by making use of key- homomorphic
cryptographic primitive to reduce the system complexity and the cost for establishing
and managing the public key authentication framework in PKI-based RDIC schemes.
We formalize ID-based RDIC and its security model, including security against a
malicious cloud server and zero knowledge privacy against a third party
verifier. The proposed ID-based RDIC protocol leaks no information of the
stored data to the verifier during the RDIC process. The new construction is
proven secure against the malicious server in the generic group model and
achieves zero knowledge privacy against a verifier. Extensive security analysis
and implementation results demonstrate that the proposed protocol is provably
secure and practical in real-world applications.
IEEE 2017: Identity-Based Data
Outsourcing with Comprehensive Auditing in Clouds
ABSTRACT :Cloud storage system provides facilitative
file storage and sharing services for distributed clients. To address
integrity, controllable outsourcing and origin auditing concerns on outsourced
files, we propose an identity-based data outsourcing (IBDO) scheme equipped
with desirable features advantageous over existing proposals in securing
outsourced data. First, our IBDO scheme allows a user to authorize dedicated
proxies to upload data to the cloud storage server on her behalf, e.g., a
company may authorize some employees to upload files to the company’s cloud
account in a controlled way. The proxies are identified and authorized with
their recognizable identities, which eliminates complicated certificate
management in usual secure distributed computing systems. Second, our IBDO
scheme facilitates comprehensive auditing, i.e., our scheme not only permits
regular integrity auditing as in existing schemes for securing outsourced data,
but also allows to audit the information on data origin, type and consistence
of outsourced files.
IEEE 2017: TAFC: Time and
Attribute Factors Combined Access Control on Time-Sensitive Data in Public
Cloud
ABSTRACT : The
new paradigm of outsourcing data to the cloud is a double-edged sword. On one
side, it frees up data owners from the technical management, and is easier for
the data owners to share their data with intended recipients when data are
stored in the cloud. On the other side, it brings about new challenges about privacy
and security protection. To protect data confidentiality against the
honest-but-curious cloud service provider, numerous works have been proposed to
support fine-grained data access control. However, till now, no efficient
schemes can provide the scenario of fine-grained access control together with
the capacity of time-sensitive data publishing. In this paper, by embedding the
mechanism of timed-release encryption into CP-ABE (Ciphertext- Policy
Attribute-based Encryption), we propose TAFC: a new time and attribute factors
combined access control on time sensitive data stored in cloud. Extensive
security and performance analysis shows that our proposed scheme is highly
efficient and satisfies
the security requirements for time-sensitive data storage in the public cloud.
IEEE 2017: Attribute-Based
Storage Supporting Secure Deduplication of Encrypted Data in Cloud
ABSTRACT :Attribute-based encryption (ABE) has
been widely used in cloud computing where a data provider outsources his/her
encrypted data to a cloud service provider, and can share the data with users
possessing specific credentials (or attributes). However, the standard ABE
system does not support secure deduplication, which is crucial for eliminating
duplicate copies of identical data in order to save storage space and network
bandwidth. In this paper, we present an attribute-based storage system with
secure deduplication in a hybrid cloud setting, where a private cloud is
responsible for duplicate detection and a public cloud manages the storage.
Compared with the prior data deduplication systems, our system has two
advantages. Firstly, it can be used to confidentially share data with users by
specifying access policies rather than sharing decryption keys. Secondly, it
achieves the standard notion of semantic security for data confidentiality
while existing systems only achieve it by defining a weaker security notion.
IEEE 2017: A
Collision-Mitigation Cuckoo Hashing Scheme for Large-scale Storage Systems
ABSTRACT : With the rapid growth of the amount of
information, cloud computing servers need to process and analyze large amounts
of high-dimensional and unstructured data timely and accurately. This usually
requires many query operations. Due to simplicity and ease of use, cuckoo
hashing schemes have been widely used in real-world cloud-related applications.
However, due to potential hash collisions, cuckoo hashing suffers from
endless loops and high insertion latency, even high risks of re-construction of
entire hash table. In order to address these problems, we propose a
cost-efficient cuckoo hashing scheme, called MinCounter. The idea behind MinCounter
is to alleviate the occurrence of endless loops in the data insertion by selecting
unbusy kicking-out routes. MinCounter selects the “cold” (infrequently
accessed), rather than random, buckets to handle hash collisions. We further
improve the concurrency of the MinCounter scheme to pursue higher performance
and adapt to concurrent applications. MinCounter has the salient features of
offering efficient insertion and query services and delivering high performance
of cloud servers, as well as enhancing the experiences for cloud users. We have
implemented MinCounter in a large-scale cloud test bed and examined the
performance by using three real-world traces. Extensive experimental results
demonstrate the efficacy and efficiency of MinCounter.
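The MinCounter heuristic is easy to sketch: keep a per-bucket eviction counter and, on a collision, kick out of the "cold" bucket with the smallest counter. The table size, hash functions, and kick limit below are illustrative assumptions.

```python
# Hedged sketch of the MinCounter idea for cuckoo hashing: evict from
# the less-busy ("cold") bucket instead of a random one.

SIZE, MAX_KICKS = 11, 32

def h1(key): return hash(("a", key)) % SIZE
def h2(key): return hash(("b", key)) % SIZE

table = [None] * SIZE
counters = [0] * SIZE          # how often each bucket kicked an item out

def insert(key):
    for _ in range(MAX_KICKS):
        spots = [h1(key), h2(key)]
        for s in spots:
            if table[s] is None:
                table[s] = key
                return True
        # Both buckets full: evict from the bucket with the smaller
        # eviction counter (the "cold" kicking-out route).
        victim = min(spots, key=lambda s: counters[s])
        counters[victim] += 1
        table[victim], key = key, table[victim]
    return False               # a real table would rehash here

ok = all(insert(f"item{k}") for k in range(5))
print("all inserted:", ok)
print("item3 found:",
      table[h1("item3")] == "item3" or table[h2("item3")] == "item3")
```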
IEEE 2016: Reducing
Fragmentation for In-line Deduplication Backup Storage via Exploiting Backup
History and Cache Knowledge
ABSTRACT : In backup systems, the chunks of
each backup are physically scattered after deduplication, which causes a
challenging fragmentation problem. We observe that the fragmentation comes into
sparse and out-of-order containers. The sparse container decreases restore
performance and garbage collection efficiency, while the out-of-order container
decreases restore performance if the restore cache is small. In order to reduce
the fragmentation, we propose History-Aware Rewriting algorithm (HAR) and
Cache-Aware Filter (CAF). HAR exploits historical information in backup systems
to accurately identify and reduce sparse containers, and CAF exploits restore
cache knowledge to identify the out-of-order containers that hurt restore
performance. CAF efficiently complements HAR in datasets where out-of-order
containers are dominant. To reduce the metadata overhead of the garbage
collection, we further propose a Container-Marker Algorithm (CMA) to identify
valid containers instead of valid chunks. Our extensive experimental results
from real-world datasets show that HAR significantly improves the restore
performance by 2.84-175.36× at a cost of rewriting only 0.5-2.03 percent of the data.
IEEE 2016: Secure Data Sharing
in Cloud Computing Using Revocable-Storage Identity-Based Encryption
ABSTRACT :Cloud computing provides a flexible
and convenient way for data sharing, which brings various benefits for both the
society and individuals. But there exists a natural resistance for users to
directly outsource the shared data to the cloud server since the data often
contain valuable information. Thus, it is necessary to place cryptographically
enhanced access control on the shared data. Identity-based encryption is a
promising cryptographical primitive to build a practical data sharing system.
However, access control is not static. That is, when some user’s authorization
is expired, there should be a mechanism that can remove him/her from the
system. Consequently, the revoked user cannot access both the previously and
subsequently shared data. To this end, we propose a notion called
revocable-storage identity-based encryption (RS-IBE), which can provide the
forward/backward security of ciphertext by introducing the functionalities of
user revocation and ciphertext update simultaneously. Furthermore, we present
a concrete construction of RS-IBE, and prove its security in the defined
security model. The performance comparisons indicate that the proposed RS-IBE
scheme has advantages in terms of functionality and efficiency, and thus is
feasible for a practical and cost-effective data-sharing system. Finally, we
provide implementation results of the proposed scheme to demonstrate its
practicability.
IEEE 2016: Key-Aggregate
Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage
ABSTRACT :The capability of selectively
sharing encrypted data with different users via public cloud storage may
greatly ease security concerns over inadvertent data leaks in the cloud. A key
challenge to designing such encryption schemes lies in the efficient management
of encryption keys. The desired flexibility of sharing any group of selected
documents with any group of users demands different encryption keys to be used
for different documents. However, this also implies the necessity of securely
distributing to users a large number of keys for both encryption and search,
and those users will have to securely store the received keys, and submit an
equally large number of keyword trapdoors to the cloud in order to perform
search over the shared data. The implied need for secure communication,
storage, and complexity clearly renders the approach impractical. In this
paper, we address this practical problem, which is largely neglected in the
literature, by proposing the novel concept of key-aggregate searchable
encryption and instantiating the concept through a concrete KASE scheme, in
which a data owner only needs to distribute a single key to a user for sharing
a large number of documents, and the user only needs to submit a single
trapdoor to the cloud for querying the shared documents. The security analysis
and performance evaluation both confirm that our proposed schemes are provably
secure and practically efficient.
IEEE 2016: Public Integrity
Auditing for Shared Dynamic Cloud Data with Group User Revocation
ABSTRACT : The advent of the cloud
computing makes storage outsourcing becomes a rising trend, which promotes the
secure remote data auditing a hot topic that appeared in the research
literature. Recently some research considers the problem of secure and
efficient public data integrity auditing for shared dynamic data. However,
these schemes are still not secure against the collusion of cloud storage
server and revoked group users during user revocation in practical cloud
storage system. In this paper, we figure out the collusion attack in the
existing scheme and provide an efficient public integrity auditing scheme with
secure group user revocation based on vector commitment and verifier-local
revocation group signature. We design a concrete scheme based on our scheme
definition. Our scheme supports public checking and efficient user
revocation and also has some nice properties, such as confidentiality, efficiency,
countability and traceability of secure group user revocation. Finally, the
security and experimental analysis show that, compared with its relevant
schemes, our scheme is also secure and efficient.
IEEE 2016: Secure Auditing and
Deduplicating Data in Cloud
ABSTRACT: As cloud computing technology has developed over the last decade,
outsourcing data to cloud services for storage has become an attractive trend,
which spares the effort of heavy data maintenance and management. Nevertheless,
since the outsourced cloud storage is not fully trustworthy, it raises security
concerns about how to realize data deduplication in the cloud while achieving
integrity auditing. In this work, we study the problem of integrity auditing
and secure deduplication on cloud data. Specifically, aiming at achieving both
data integrity and deduplication in the cloud, we propose two secure systems,
namely SecCloud and SecCloud+. SecCloud introduces an auditing entity with a
maintained MapReduce cloud, which helps clients generate data tags before
uploading as well as audit the integrity of data stored in the cloud. Compared
with previous work, the computation performed by the user in SecCloud is
greatly reduced during the file uploading and auditing phases. SecCloud+ is
designed motivated by the fact that customers always want to encrypt their
data before uploading, and it enables integrity auditing and secure
deduplication on encrypted data.
IEEE 2016 Cloud Computing
ABSTRACT: Tiny computers located in end-user
premises are becoming popular as local servers for Internet of Things (IoT) and
Fog computing services. These highly distributed servers that can host and
distribute content and applications in a peer-to-peer (P2P) fashion are known
as nano data centers (nDCs). Despite the growing popularity of nano servers,
their energy consumption is not well-investigated. To study energy consumption
of nDCs, we propose and use flow-based and time-based energy consumption models
for shared and unshared network equipment, respectively. To apply and validate
these models, a set of measurements and experiments are performed to compare
energy consumption of a service provided by nDCs and centralized data centers
(DCs). A number of findings emerge from our study, including the factors in the
system design that allow nDCs to consume less energy than their centralized
counterparts. These include the type of access network attached to nano servers
and the nano server's time utilization (the ratio of the idle time to active
time). Additionally, the type of applications running on nDCs and factors such
as the number of downloads, number of updates, and amount of preloaded copies
of data influence the energy cost. Our results reveal that the number of hops
between a user and content has little impact on the total energy consumption
compared to the above-mentioned factors. We show that nano servers in Fog
computing can complement centralized DCs to serve certain applications, mostly
IoT applications for which the source of data is in end-user premises, and can
lead to energy savings if the applications (or parts of them) can be off-loaded
from centralized DCs and run on nDCs.
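The flow-based versus time-based distinction above is easy to make concrete.
Below is a minimal Python sketch of the two accounting styles, assuming purely
illustrative power and per-bit figures (none of the numbers come from the
paper):

# Flow-based model for shared equipment (energy scales with bytes carried)
# versus time-based model for unshared equipment (energy scales with on-time).
# All parameter values below are hypothetical.

def flow_based_energy_J(bytes_carried: float, joules_per_bit: float) -> float:
    """Energy attributable to a flow on shared equipment (e.g., core routers)."""
    return bytes_carried * 8 * joules_per_bit

def time_based_energy_J(power_active_W: float, power_idle_W: float,
                        active_s: float, idle_s: float) -> float:
    """Energy of unshared equipment (e.g., a nano server or home gateway)."""
    return power_active_W * active_s + power_idle_W * idle_s

if __name__ == "__main__":
    # A 1 GB download crossing 3 shared hops at a hypothetical 50 nJ/bit each...
    transport = sum(flow_based_energy_J(1e9, 50e-9) for _ in range(3))
    # ...served by a nano server that is active 5% of the hour.
    server = time_based_energy_J(power_active_W=10, power_idle_W=8,
                                 active_s=0.05 * 3600, idle_s=0.95 * 3600)
    print(f"transport: {transport:.0f} J, nano server: {server:.0f} J")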
IEEE 2016: CloudArmor:
Supporting Reputation-based Trust Management for Cloud Services
IEEE 2016 Cloud Computing
Abstract—Trust
management is one of the most challenging issues for the adoption and growth of
cloud computing. The highly dynamic, distributed, and non-transparent nature of
cloud services introduces several challenging issues such as privacy, security,
and availability. Preserving consumers’ privacy is not an easy task due to the
sensitive information involved in the interactions between consumers and the
trust management service. Protecting cloud services against their malicious
users (e.g., such users might give misleading feedback to disadvantage a
particular cloud service) is a difficult problem. Guaranteeing the availability
of the trust management service is another significant challenge because of the
dynamic nature of cloud environments. In this article, we describe the design
and implementation of CloudArmor, a reputation-based trust management framework
that provides a set of functionalities to deliver Trust as a Service (TaaS),
which includes i) a novel protocol to prove the credibility of trust feedbacks
and preserve users’ privacy, ii) an adaptive and robust credibility model for
measuring the credibility of trust feedbacks to protect cloud services from
malicious users and to compare the trustworthiness of cloud services, and iii)
an availability model to manage the availability of the decentralized
implementation of the trust management service. The feasibility and benefits of
our approach have been validated by a prototype and experimental studies using
a collection of real-world trust feedbacks on cloud services.
IEEE 2016: Secure
Optimization Computation Outsourcing in Cloud Computing:  A Case Study
of Linear Programming
IEEE 2016 Cloud Computing
Abstract—Cloud
computing enables an economically promising paradigm of computation
outsourcing. However, how to protect customers' confidential data processed and
generated during the computation is becoming the major security concern.
Focusing on engineering computing and optimization tasks, this paper
investigates secure outsourcing of widely applicable linear programming (LP)
computations. Our mechanism design explicitly decomposes LP computation
outsourcing into public LP solvers running on the cloud and private LP
parameters owned by the customer. The resulting flexibility allows us to
explore an appropriate security/efficiency tradeoff via a higher-level
abstraction of LP computation than the general circuit representation.
Specifically, by formulating the private LP problem as a set of
matrices/vectors, we develop
efficient privacy-preserving problem transformation techniques, which allow
customers to transform the original LP into some random one while protecting
sensitive input/output information. To validate the computation result, we
further explore the fundamental duality theorem of LP and derive the necessary
and sufficient conditions that correct results must satisfy. Such result
verification mechanism is very efficient and incurs close-to-zero additional
cost on both cloud server and customers. Extensive security analysis and
experiment results show the immediate practicability of our mechanism design.
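To give a concrete feel for the disguise-and-verify workflow described above,
here is a minimal Python sketch. It is not the paper's exact transformation: we
hide the LP behind a random positive diagonal scaling Q (so x = Qy keeps
nonnegativity meaningful) and a random invertible matrix M on the equality
constraints, and we re-check feasibility on return where the paper uses duality
conditions. The helper names and the toy problem are ours:

# Customer hides  min c^T x  s.t.  A x = b, x >= 0  before outsourcing it.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def disguise(c, A, b):
    n, m = len(c), len(b)
    Q = np.diag(rng.uniform(0.5, 2.0, size=n))   # secret positive scaling
    M = rng.standard_normal((m, m))              # secret invertible mixing
    while abs(np.linalg.det(M)) < 1e-6:          # re-draw if near-singular
        M = rng.standard_normal((m, m))
    return Q @ c, M @ A @ Q, M @ b, Q

def cloud_solve(c_, A_, b_):
    res = linprog(c_, A_eq=A_, b_eq=b_, bounds=(0, None), method="highs")
    assert res.success, res.message
    return res.x

# Customer side: disguise, outsource, recover, verify.
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
c_, A_, b_, Q = disguise(c, A, b)
y = cloud_solve(c_, A_, b_)                      # done by the cloud
x = Q @ y                                        # undo the secret scaling
assert np.allclose(A @ x, b) and (x >= -1e-9).all()   # feasibility check
print("recovered optimum:", x, "objective:", c @ x)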
IEEE 2016: Ensures
Dynamic access and Secure E-Governance system in Clouds Services – EDSE
IEEE 2016 Cloud Computing
ABSTRACT: The E-Governance process helps the public to obtain information and
access data themselves rather than depending on physical guidance.
E-Governance has matured through a decade of experience; hence there is a need
to explore new E-Governance concepts with advanced technologies. These systems
are now exposed to a wide range of threats while handling information. This
paper therefore designs an efficient system for ensuring security and dynamic
operation: remote integrity checking with secure dynamic operations is designed
and implemented in an E-Governance environment. The data is stored on the
server using dynamic data operations with the proposed method, which enables
the user to access the data for further usage. Here the system performs an
authentication process to prevent data loss and ensure security with a
reliability method. An efficient distributed storage auditing mechanism is
planned which overcomes the limitations in handling data loss. Content access
is made easy by means of cloud computing, using an innovative method during
information retrieval. Ensuring data security in this service enforces error
localization and easy identification of a misbehaving server. Availability,
confidentiality, and integrity are the key factors of security. Data in cloud
services is dynamic in nature; hence this process aims to perform the
operations with a reduced computational rate and reduced space and time
consumption, and also to ensure trust-based secure access control.
IEEE 2016: On
Traffic-Aware Partition and Aggregation in MapReduce for Big Data
Applications
IEEE 2016 Cloud Computing
ABSTRACT: The MapReduce programming model simplifies large-scale data
processing on commodity clusters by exploiting parallel map tasks and reduce
tasks. Although many efforts have been made to improve the performance of
MapReduce jobs, they ignore the network traffic generated in the shuffle phase,
which plays a critical role in performance enhancement. Traditionally, a hash
function is used to partition intermediate data among reduce tasks, which,
however, is not traffic-efficient because network topology and data size
associated with each key are not taken into consideration. In this paper, we
study how to reduce the network traffic cost of a MapReduce job by designing a
novel intermediate data partition scheme. Furthermore, we jointly consider the
aggregator placement problem, where each aggregator can reduce merged traffic
from multiple map tasks. A decomposition-based distributed algorithm is
proposed to deal with the large-scale optimization problem for big data
applications, and an online algorithm is also designed to adjust data partition
and aggregation in a dynamic manner. Finally, extensive simulation results
demonstrate that our proposals can significantly reduce network traffic cost
under both offline and online cases.
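As a toy illustration of why topology- and size-aware partitioning beats plain
hashing, the sketch below greedily assigns each intermediate key to the reducer
that minimizes shuffle traffic, weighting per-mapper data sizes by hop
distance. It is a simplification of the scheme above; all sizes, distances, and
the load cap are invented:

def assign_keys(sizes, dist, reducers, cap):
    """sizes[key][mapper] -> bytes; dist[mapper][reducer] -> hop count."""
    load = {r: 0 for r in reducers}
    plan = {}
    # Place big keys first: they dominate shuffle traffic.
    for key, per_mapper in sorted(sizes.items(),
                                  key=lambda kv: -sum(kv[1].values())):
        total = sum(per_mapper.values())
        feasible = [r for r in reducers if load[r] + total <= cap]
        cost = lambda r: sum(sz * dist[m][r] for m, sz in per_mapper.items())
        best = min(feasible or reducers, key=cost)   # overflow if none fit
        plan[key], load[best] = best, load[best] + total
    return plan

sizes = {"k1": {"m1": 900, "m2": 100}, "k2": {"m1": 50, "m2": 800}}
dist = {"m1": {"r1": 1, "r2": 3}, "m2": {"r1": 3, "r2": 1}}
print(assign_keys(sizes, dist, ["r1", "r2"], cap=1500))
# -> k1 lands near m1 (r1), k2 near m2 (r2), unlike a hash partitioner.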
IEEE 2016: A Secure
and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud Data
IEEE 2016 Cloud Computing
ABSTRACT: Due to the increasing popularity of cloud computing, more and
more data owners are motivated to outsource their data to cloud servers for
great convenience and reduced cost in data management. However, sensitive data
should be encrypted before outsourcing for privacy requirements, which
obsoletes data utilization like keyword-based document retrieval. In this
paper, we present a secure multi-keyword ranked search scheme over encrypted
cloud data, which simultaneously supports dynamic update operations like deletion
and insertion of documents. Specifically, the vector space model and the
widely used TF-IDF model are combined in the index construction and query
generation. We construct a special tree-based index structure and propose a
“Greedy Depth-first Search” algorithm to provide efficient multi-keyword ranked
search. The secure kNN algorithm is utilized to encrypt the index and query
vectors, and meanwhile ensure accurate relevance score calculation between
encrypted index and query vectors. In order to resist statistical attacks,
phantom terms are added to the index vector to blind the search results. Due
to the use of our special tree-based index structure, the proposed scheme can
achieve sub-linear search time and deal with the deletion and insertion of
documents flexibly. Extensive experiments are conducted to demonstrate the
efficiency of the proposed scheme.
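The "secure kNN" step above usually refers to the splitting construction in
which index and query vectors are split under a secret bit-string and
multiplied by secret invertible matrices, yet the encrypted inner product still
equals the plaintext relevance score. A minimal sketch of that property, with
purely illustrative dimensions and key material:

import numpy as np

rng = np.random.default_rng(1)
d = 4
S = rng.integers(0, 2, size=d)                 # secret splitting indicator
M1 = rng.standard_normal((d, d)); M2 = rng.standard_normal((d, d))

def enc_index(p):
    p1, p2 = p.copy(), p.copy()
    for j in range(d):
        if S[j]:                               # split index entry randomly
            r = rng.standard_normal()
            p1[j], p2[j] = r, p[j] - r
    return M1.T @ p1, M2.T @ p2

def trapdoor(q):
    q1, q2 = q.copy(), q.copy()
    for j in range(d):
        if not S[j]:                           # split query entry randomly
            r = rng.standard_normal()
            q1[j], q2[j] = r, q[j] - r
    return np.linalg.inv(M1) @ q1, np.linalg.inv(M2) @ q2

p = rng.random(d)                              # e.g., a TF-IDF index vector
q = rng.random(d)                              # e.g., a query vector
(i1, i2), (t1, t2) = enc_index(p), trapdoor(q)
score = i1 @ t1 + i2 @ t2                      # computed by the cloud
assert np.isclose(score, p @ q)                # equals plaintext relevance
print(score, p @ q)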
IEEE 2016: An
Efficient Privacy-Preserving Ranked Keyword Search Method
IEEE 2016 Cloud Computing
ABSTRACT: Cloud data owners prefer to outsource documents in an encrypted
form for the purpose of privacy preserving. Therefore it is essential to
develop efficient and reliable ciphertext search techniques. One challenge is
that the relationship between documents will be normally concealed in the
process of encryption, which will lead to significant search accuracy
performance degradation. Also the volume of data in data centers has
experienced a dramatic growth. This will make it even more challenging to
design ciphertext search schemes that can provide efficient and reliable online
information retrieval on large volume of encrypted data. In this paper, a hierarchical
clustering method is proposed to support more search semantics and also to meet
the demand for fast ciphertext search within a big data environment. The
proposed hierarchical approach clusters the documents based on the minimum
relevance threshold, and then partitions the resulting clusters into
sub-clusters until the constraint on the maximum size of cluster is reached. In
the search phase, this approach can reach a linear computational complexity
against an exponential size increase of document collection. In order to verify
the authenticity of search results, a structure called minimum hash sub-tree is
designed in this paper. Experiments have been conducted using a collection
built from IEEE Xplore. The results show that with a sharp increase in the
number of documents in the dataset, the search time of the proposed method increases
linearly whereas the search time of the traditional method increases
exponentially. Furthermore, the proposed method has an advantage over the
traditional method in the rank privacy and relevance of retrieved documents.
IEEE 2016: Differentially Private Online Learning for Cloud-Based Video
Recommendation with Multimedia Big Data in Social Networks
IEEE 2016 Cloud Computing
ABSTRACT: With the rapid growth in multimedia services and the enormous
offering of video content in online social networks, users have difficulty
finding content that matches their interests. Therefore, various personalized
recommendation systems have been proposed. However, they ignore that the
accelerated proliferation of social media data has led to the big data era,
which has greatly impeded the process of video recommendation. In addition,
none of them has considered both the privacy of users' contexts (e.g., social
status, age, and hobbies) and video service vendors' repositories, which are
extremely sensitive and of significant commercial value. To handle these
problems, we propose a cloud-assisted differentially private video
recommendation system based on distributed online learning. In our framework,
service vendors are modeled as distributed cooperative learners, recommending
videos according to the user's context, while simultaneously adapting the
video-selection strategy based on user-click feedback to maximize total user
clicks (reward). Considering the sparsity and heterogeneity of big social media
data, we also propose a novel geometric differentially private model, which can
greatly reduce the performance (recommendation accuracy) loss. Our simulations
show that the proposed algorithms outperform existing methods and strike a
delicate balance between recommendation accuracy and the level of privacy
preservation.
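In the differential privacy literature, a "geometric" mechanism typically means
adding two-sided geometric noise with Pr[k] proportional to alpha^|k| to
integer statistics such as click counts; the paper's model may differ in
detail. A minimal sampling sketch, with a hypothetical epsilon:

import math, random

def two_sided_geometric_noise(epsilon: float, sensitivity: int = 1) -> int:
    alpha = math.exp(-epsilon / sensitivity)
    # The difference of two i.i.d. geometric variables is two-sided geometric.
    g1 = math.floor(math.log(random.random()) / math.log(alpha))
    g2 = math.floor(math.log(random.random()) / math.log(alpha))
    return g1 - g2

true_clicks = 42
noisy_clicks = true_clicks + two_sided_geometric_noise(epsilon=0.5)
print(noisy_clicks)   # released count stays an integer, unlike Laplace noise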
IEEE 2016: Fine-Grained Two-Factor Access Control for Web-Based Cloud Computing
Services
IEEE 2016 Cloud Computing
ABSTRACT: In this paper, we introduce a new fine-grained two-factor
authentication (2FA) access control system for web-based cloud computing
services. Specifically, in our proposed 2FA access control system, an
attribute-based access control mechanism
is implemented with the necessity of both a user secret key and a lightweight
security device. As a user cannot access the system if they do not hold both,
the mechanism can enhance the security of the system, especially in those
scenarios where many users share the same computer for web-based cloud services.
In addition, attribute-based control in the system also enables the cloud
server to restrict the access to those users with the same set of attributes
while preserving user privacy, i.e., the cloud server only knows that the user
fulfills the required predicate but learns nothing about the exact identity of the
user. Finally, we also carry out a simulation to demonstrate the practicability
of our proposed 2FA system.
IEEE 2016: Dual-Server Public-Key Encryption with Keyword Search for Secure
Cloud Storage
IEEE 2016 Cloud Computing
ABSTRACT: Searchable encryption is of increasing interest for protecting the
data privacy in secure searchable cloud storage. In this work, we investigate
the security of a well-known cryptographic primitive, namely Public Key
Encryption with Keyword Search (PEKS) which is very useful in many applications
of cloud storage. Unfortunately, it has been shown that the traditional PEKS
framework suffers from an inherent insecurity called inside Keyword Guessing
Attack (KGA) launched by the malicious server. To address this security
vulnerability, we propose a new PEKS framework named Dual-Server Public Key
Encryption with Keyword Search (DS-PEKS). As another main contribution, we
define a new variant of the Smooth Projective Hash Functions (SPHFs) referred
to as linear and homomorphic SPHF (LH-SPHF). We then show a generic
construction of secure DS-PEKS from LH-SPHF. To illustrate the feasibility of our
new framework, we provide an efficient instantiation of the general framework
from a DDH-based LH-SPHF and show that it can achieve the strong security
against inside KGA.
IEEE 2016: DeyPoS:
Deduplicatable Dynamic Proof of Storage for Multi-User Environments
IEEE 2016 Cloud Computing
ABSTRACT: Dynamic
Proof of Storage (PoS) is a useful cryptographic primitive that enables a user
to check the integrity of outsourced files and to efficiently update the files
in a cloud server. Although researchers have proposed many dynamic PoS schemes
in single-user environments, the problem in multi-user environments has not
been investigated sufficiently. A practical multi-user cloud storage system
needs the secure client-side cross-user deduplication technique, which allows a
user to skip the uploading process and obtain the ownership of the files
immediately, when other owners of the same files have uploaded them to the
cloud server. To the best of our knowledge, none of the existing dynamic PoSs
can support this technique. In this paper, we introduce the concept of deduplicatable
dynamic proof of storage and propose an efficient construction called DeyPoS,
to achieve dynamic PoS and secure cross-user deduplication, simultaneously.
Considering the challenges of structure diversity and private tag generation,
we exploit a novel tool called Homomorphic Authenticated Tree (HAT). We prove
the security of our construction, and the theoretical analysis and experimental
results show that our construction is efficient in practice.
IEEE 2016: KSF-OABE: Outsourced Attribute-Based Encryption with Keyword Search
Function for Cloud Storage
IEEE 2016 Cloud Computing
ABSTRACT: Cloud computing is becoming increasingly popular with data owners,
who outsource their data to public cloud servers while allowing intended data
users to retrieve the data stored in the cloud. This computing model brings
challenges to the security and privacy of data stored in the cloud.
Attribute-based encryption (ABE) technology has been used to design
fine-grained access control systems, which provide one good method to solve
the security issues in the cloud setting. However, the computation cost and
ciphertext size in most ABE schemes grow with the complexity of the access
policy. Outsourced ABE (OABE) with fine-grained access control can largely
reduce the computation cost for users who want to access encrypted data stored
in the cloud by outsourcing the heavy computation to the cloud service provider
(CSP). However, as the amount of encrypted files stored in the cloud becomes
very large, efficient query processing is hindered. To deal with the above
problem, we present a new cryptographic primitive called attribute-based
encryption with outsourced key-issuing and outsourced decryption, which can
implement a keyword search function (KSF-OABE). The proposed KSF-OABE scheme
is proved secure against chosen-plaintext attack (CPA). The CSP performs the
partial decryption task delegated by the data user without learning anything
about the plaintext. Moreover, the CSP can perform encrypted keyword search
without learning anything about the keywords embedded in the trapdoor.
IEEE 2016: SecRBAC:
Secure data in the Clouds
IEEE 2016 Cloud Computing
ABSTRACT: Most current security solutions are
based on perimeter security. However, Cloud computing breaks organizational
perimeters. When data resides in the Cloud, it resides outside the
organizational bounds. This leads users to a loss of control over their data
and raises reasonable security concerns that slow down the adoption of Cloud
computing. Is the Cloud service provider accessing the data? Is it legitimately
applying the access control policy defined by the user? This paper presents a
data-centric access control solution with enriched role-based expressiveness in
which security is focused on protecting user data regardless of the Cloud
service provider that holds it. Novel identity-based and proxy re-encryption
techniques are used to protect the authorization model. Data is encrypted and
authorization rules are cryptographically protected to preserve user data
against service provider access or misbehavior. The authorization model
provides high expressiveness with role hierarchy and resource hierarchy support.
The solution takes advantage of the logic formalism provided by Semantic Web
technologies, which enables advanced rule management like semantic conflict
detection. A proof of concept implementation has been developed and a working
prototypical deployment of the proposal has been integrated within Google services.
IEEE 2015: Identity-based
Encryption with Outsourced Revocation in Cloud Computing
IEEE 2015 Cloud Computing
ABSTRACT : Identity-Based Encryption (IBE)
which simplifies the public key and certificate management at Public Key
Infrastructure (PKI) is an important alternative to public key encryption.
However, one of the main efficiency drawbacks of IBE is the overhead
computation at Private Key Generator (PKG) during user revocation. Efficient
revocation has been well studied in traditional PKI setting, but the cumbersome
management of certificates is precisely the burden that IBE strives to
alleviate. In this paper, aiming at tackling the critical issue of identity
revocation, we introduce outsourcing computation into IBE for the first time
and propose a revocable IBE scheme in the server-aided setting. Our scheme
offloads most of the key generation related operations during key-issuing and
key-update processes to a Key Update Cloud Service Provider, leaving only a
constant number of simple operations for PKG and users to perform locally. This
goal is achieved by utilizing a novel collusion-resistant technique: we employ
a hybrid private key for each user, in which an AND gate is involved to connect
and bound the identity component and the time component. Furthermore, we
propose another construction which is provably secure under the recently
formalized Refereed Delegation of Computation model. Finally, we provide
extensive experimental results to demonstrate the efficiency of our proposed
construction.
IEEE 2015: Control
Cloud Data Access Privilege and Anonymity With Fully Anonymous Attribute-Based
Encryption
IEEE 2015 Cloud Computing
Abstract
— Cloud computing is a
revolutionary computing paradigm, which enables flexible, on-demand, and
low-cost usage of computing resources, but the data is outsourced to cloud
servers, and various privacy concerns emerge from it. Various schemes based on
attribute-based encryption have been proposed to secure cloud storage.
However, most work focuses on the data contents privacy and the access control,
while less attention is paid to the privilege control and the identity privacy.
In this paper, we present a semianonymous privilege control scheme AnonyControl
to address not only the data privacy, but also the user identity privacy in
existing access control schemes. AnonyControl decentralizes the central
authority to limit the identity leakage and thus achieves semianonymity. Besides,
it also generalizes the file access control to the privilege control, by which
privileges of all operations on the cloud data can be managed in a fine-grained
manner. Subsequently, we present AnonyControl-F, which fully prevents the
identity leakage and achieves full anonymity. Our security analysis shows
that both AnonyControl and AnonyControl-F are secure under the decisional
bilinear Diffie–Hellman assumption, and our performance evaluation exhibits the
feasibility of our schemes.
IEEE 2015: DROPS:
Division and Replication of Data in Cloud for Optimal Performance and Security
IEEE 2015 Cloud Computing
Abstract
—Outsourcing data to a
third-party administrative control, as is done in cloud computing, gives rise
to security concerns. The data compromise may occur due to attacks by other
users and nodes within the cloud. Therefore, high security measures are required
to protect data within the cloud. However, the employed security strategy must
also take into account the optimization of the data retrieval time. In this
paper, we propose Division and Replication of Data in the Cloud for Optimal
Performance and Security (DROPS) that collectively approaches the security and
performance issues. In the DROPS methodology, we divide a file into fragments,
and replicate the fragmented data over the cloud nodes. Each of the nodes
stores only a single fragment of a particular data file that ensures that even
in case of a successful attack, no meaningful information is revealed to the attacker.
Moreover, the nodes storing the fragments are separated by a certain distance
by means of graph T-coloring to prevent an attacker from guessing the locations
of the fragments. Furthermore, the DROPS methodology does not rely on the
traditional cryptographic techniques for the data security; thereby relieving
the system of computationally expensive methodologies. We show that the
probability to locate and compromise all of the nodes storing the fragments of
a single file is extremely low. We also compare the performance of the DROPS
methodology with ten other schemes. A higher level of security with only a
slight performance overhead was observed.
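The placement constraint above (fragments of one file kept pairwise separated
in the node graph) can be sketched in a few lines. The greedy scan below stands
in for the paper's T-coloring-based placement; the ring topology and threshold
T are invented for illustration:

from collections import deque

def hops(adj, src):
    """BFS hop counts from src over an adjacency dict."""
    dist, todo = {src: 0}, deque([src])
    while todo:
        u = todo.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                todo.append(v)
    return dist

def place_fragments(adj, n_fragments, T):
    chosen = []
    for cand in adj:                       # greedy scan over candidate nodes
        if all(hops(adj, cand).get(c, 10**9) > T for c in chosen):
            chosen.append(cand)
            if len(chosen) == n_fragments:
                return chosen
    raise RuntimeError("topology cannot separate that many fragments")

ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}   # 8-node ring
print(place_fragments(ring, n_fragments=3, T=1))           # -> [0, 2, 4]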
IEEE 2015: An
Efficient Green Control Algorithm in Cloud Computing for Cost Optimization
IEEE 2015 Cloud Computing
Abstract
—Cloud computing is a
new paradigm for delivering remote computing resources through a network.
However, achieving an energy-efficiency control and simultaneously satisfying a
performance guarantee have become critical issues for cloud providers. In this
paper, three power-saving policies are implemented in cloud systems to mitigate
server idle power. The challenges of controlling service rates and applying the
N-policy to optimize operational cost within a performance guarantee are first
studied. A cost function has been developed in which the costs of power
consumption, system congestion and server startup are all taken into
consideration. The effect of energy-efficiency controls on response times,
operating modes and incurred costs are all demonstrated. Our objectives are to
find the optimal service rate and mode-switching restriction, so as to minimize
cost within a response time guarantee under varying arrival rates. An efficient green control (EGC) algorithm
is first proposed for solving constrained optimization problems and making
costs/performances tradeoffs in systems with different power-saving policies.
Simulation results show that the benefits of reducing operational costs and
improving response times can be verified by applying the power-saving policies
combined with the proposed algorithm, as compared to a typical system under the
same performance guarantee.
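As a toy version of the cost trade-off described above, the sketch below uses
an M/M/1 queue as a stand-in for the cloud system and searches candidate
service rates for the cheapest one that still meets a response time guarantee.
All cost coefficients and rates are hypothetical:

def mean_response_time(lam: float, mu: float) -> float:
    return 1.0 / (mu - lam)            # M/M/1 mean response time, mu > lam

def cost(mu, lam, c_power=1.0, c_hold=2.0, c_startup=0.5):
    L = lam / (mu - lam)               # mean number in system (congestion)
    return c_power * mu + c_hold * L + c_startup

def optimal_rate(lam, t_guarantee, candidates):
    feasible = [mu for mu in candidates
                if mu > lam and mean_response_time(lam, mu) <= t_guarantee]
    return min(feasible, key=lambda mu: cost(mu, lam))

lam = 4.0                              # arrival rate (jobs/s)
mus = [x / 10 for x in range(45, 121)] # candidate service rates 4.5 .. 12.0
best = optimal_rate(lam, t_guarantee=0.5, candidates=mus)
print(best, cost(best, lam), mean_response_time(lam, best))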
IEEE 2015: Revisiting
Attribute-Based Encryption With Verifiable Outsourced Decryption
IEEE 2015 Cloud Computing
Abstract
— Attribute-based
encryption (ABE) is a promising technique for fine-grained access control of
encrypted data in cloud storage; however, the decryption involved in ABE is
usually too expensive for resource-constrained front-end users, which greatly
hinders its practical popularity. In order to reduce the decryption overhead
for a user to recover the plaintext, Green et al. suggested outsourcing the
majority of the decryption work without revealing actual data or private keys.
To ensure that the
third-party service honestly computes the outsourced work, Lai et al. provided
a requirement of verifiability to the decryption of ABE, but their scheme
doubled the size of the underlying ABE ciphertext and the computation costs.
Roughly speaking, their main idea is to use a parallel encryption technique,
while one of the encryption components is used for the verification purpose. Hence,
the bandwidth and the computation cost are doubled. In this paper, we
investigate the same problem. In particular, we propose a more efficient and
generic construction of ABE with verifiable outsourced decryption based on an
attribute-based key encapsulation mechanism, a symmetric-key encryption scheme
and a commitment scheme. Then, we prove the security and the verification
soundness of our constructed ABE scheme in the standard model. Finally, we
instantiate our scheme with concrete building blocks. Compared with Lai et al.’s
scheme, our scheme reduces the bandwidth and the computation costs almost by
half.
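The shape of the verification can be sketched generically: commit to the
data-encapsulation key at encryption time, then check the commitment on
whatever the cloud returns before trusting the symmetric decryption. The
primitives below (SHA-256 commitment, hash-based XOR keystream) are simple
stand-ins, not the paper's ABE building blocks:

import hashlib, os

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Encrypt: KEM key plus commitment com = H(key || r), published with the
# ciphertext so anyone can later verify a returned key.
key, r = os.urandom(32), os.urandom(32)
msg = b"attribute-protected record"
ct = xor(msg, keystream(key, len(msg)))
com = hashlib.sha256(key + r).digest()

# Cloud: performs the expensive (here, simulated) outsourced decryption and
# returns the encapsulated key material.
returned_key, returned_r = key, r          # an honest cloud in this demo

# User: cheap verification, then symmetric decryption.
assert hashlib.sha256(returned_key + returned_r).digest() == com, "cheating!"
print(xor(ct, keystream(returned_key, len(ct))))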
IEEE 2015: I-sieve: An inline
high performance deduplication system used in cloud storage
Abstract: Data deduplication is an emerging
and widely employed method for current storage systems. As this technology is
gradually applied in inline scenarios such as with virtual machines and cloud
storage systems, this study proposes a novel deduplication architecture called
I-sieve. The goal of I-sieve is to realize a high performance data sieve system
based on iSCSI in the cloud storage system. We also design the corresponding
index and mapping tables and present a multi-level cache using a solid state
drive to reduce RAM consumption and to optimize lookup performance. A prototype
of I-sieve is implemented based on the open source iSCSI target, and many
experiments have been conducted driven by virtual machine images and testing
tools. The evaluation results show excellent deduplication and foreground
performance. More importantly, I-sieve can co-exist with the existing deduplication
systems as long as they support the iSCSI protocol.
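A toy version of the index-plus-cache structure described above: block
fingerprints are kept in a full table (standing in for the SSD-resident mapping
table) fronted by a small LRU cache (standing in for RAM). The sizes and the
SHA-1 fingerprint choice are illustrative:

import hashlib
from collections import OrderedDict

class DedupIndex:
    def __init__(self, ram_entries=8):
        self.cap = ram_entries
        self.ram = OrderedDict()       # hot fingerprints (LRU cache in RAM)
        self.ssd = {}                  # full fingerprint -> block address map

    def write_block(self, block: bytes, addr: int) -> bool:
        """Returns True if the block is a duplicate (nothing to store)."""
        fp = hashlib.sha1(block).hexdigest()
        if fp in self.ssd:             # hit: block already stored somewhere
            self.ram[fp] = self.ssd[fp]
            self.ram.move_to_end(fp)   # keep the fingerprint hot
            return True
        self.ssd[fp] = addr            # miss: index the new unique block
        self.ram[fp] = addr
        if len(self.ram) > self.cap:   # evict the coldest entry from RAM
            self.ram.popitem(last=False)
        return False

idx = DedupIndex()
print(idx.write_block(b"A" * 4096, addr=0))   # False: first copy is stored
print(idx.write_block(b"A" * 4096, addr=1))   # True: duplicate detected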
IEEE 2015: A Hybrid Cloud
Approach for Secure Authorized Deduplication
Abstract: Data deduplication is one of the
important data compression techniques for eliminating duplicate copies of
repeating data, and has been widely used in cloud storage to reduce the amount
of storage space and save bandwidth. To protect the confidentiality of
sensitive data while supporting deduplication, the convergent encryption
technique has been proposed to encrypt the data before outsourcing. To better
protect data security, this paper makes the first attempt to formally address
the problem of authorized data deduplication. Different from traditional
deduplication systems, the differential privileges of users are further
considered in duplicate check besides the data itself. We also present several
new deduplication constructions supporting authorized duplicate check in a
hybrid cloud architecture. Security analysis demonstrates that our scheme is
secure in terms of the definitions specified in the proposed security model. As
a proof of concept, we implement a prototype of our proposed authorized
duplicate check scheme and conduct test bed experiments using our prototype. We
show that our proposed authorized duplicate check scheme incurs minimal overhead
compared to normal operations.
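For readers unfamiliar with convergent encryption, the core property takes only
a few lines: the key is derived from the data itself, so identical plaintexts
from different users encrypt to identical ciphertexts and can be deduplicated.
The SHAKE-based keystream below is an illustrative stand-in for a real block
cipher:

import hashlib

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()              # K = H(data)
    pad = hashlib.shake_256(key).digest(len(data))   # illustrative keystream
    ct = bytes(a ^ b for a, b in zip(data, pad))
    tag = hashlib.sha256(ct).hexdigest()             # duplicate-check tag
    return key, ct, tag

k1, c1, t1 = convergent_encrypt(b"same file contents")
k2, c2, t2 = convergent_encrypt(b"same file contents")
assert c1 == c2 and t1 == t2   # equal tags -> the cloud stores one copy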
IEEE 2015: Cost-Minimizing
Dynamic Migration of Content Distribution Services into Hybrid Clouds
Abstract: With the recent advent of cloud
computing technologies, a growing number of content distribution applications
are contemplating a switch to cloud-based services, for better scalability and
lower cost. Two key tasks are involved for such a move: to migrate the contents
to cloud storage, and to distribute the Web service load to cloud-based Web
services. The main issue is to best utilize the cloud as well as the
application provider's existing private cloud, to serve volatile requests with
service response time guarantee at all times, while incurring the minimum
operational cost. While it may not be too difficult to design a simple
heuristic, proposing one with guaranteed cost optimality over a long run of the
system constitutes an intimidating challenge. Employing Lyapunov optimization
techniques, we design a dynamic control algorithm to optimally place contents and
dispatch requests in a hybrid cloud infrastructure spanning geo-distributed
data centers, which minimizes the overall operational cost over time, subject
to service response time constraints. Rigorous analysis shows that the
algorithm nicely bounds the response times within the preset QoS target, and
guarantees that the overall cost is within a small constant gap from the
optimum achieved by a T-slot lookahead mechanism with known future information.
We verify the
performance of our dynamic algorithm with prototype-based evaluation.
IEEE 2015: CloudSky: A
Controllable Data Self-Destruction System for Untrusted Cloud Storage Networks
Abstract: In cloud services, users may
frequently be required to reveal their personal private information, which may
be stored in the cloud to be used by different parties for different purposes.
However, in a cloud-wide storage network, the servers are easily subjected to
strong attacks and also commonly experience software/hardware faults. As such,
private information can be at great risk in such an untrusted environment.
Given that the submitted personal sensitive information is usually out of the
user's control in most cloud-based services, ensuring data security and privacy
protection with respect to an untrusted storage network has become a formidable
research challenge. To address these challenges, in this paper we propose a
self-destruction system, named CloudSky, which is able to enforce the security
of user privacy over the untrusted cloud in a controllable way. CloudSky
exploits a key control mechanism based on attribute-based encryption (ABE) and
takes advantage of active storage networks to allow the user to control the
subjective life-cycle and the access control policies of the private data,
whose integrity is ensured by using HMAC to cope with untrusted environments,
thereby adapting it to the cloud in terms of both performance and security
requirements. The feasibility of the system in terms of its performance and
scalability is demonstrated by experiments on a real large-scale storage
network.
IEEE 2015: Energy-aware Load
Balancing and Application Scaling for the Cloud Ecosystem
Abstract: In this paper we introduce an
energy-aware operation model used for load balancing and application scaling on
a cloud. The basic philosophy of our approach is defining an energy-optimal
operation regime and attempting to maximize the number of servers operating in
this regime. Idle and lightly loaded servers are switched to one of the sleep
states to save energy. The load balancing and scaling algorithms also exploit
some of the most desirable features of server consolidation mechanisms discussed
in the literature.
IEEE 2015: SecDep: A user-aware
efficient fine-grained secure deduplication scheme with multi-level key
management
Abstract: Nowadays, many customers and enterprises back up their data
to cloud storage that performs deduplication to save storage space and network
bandwidth. Hence, how to perform secure deduplication becomes a critical
challenge for cloud storage. According to our analysis, the state-of-the-art
secure deduplication methods are not suitable for cross-user fine-grained data
deduplication. They either suffer from brute-force attacks that can recover
files falling into a known set, or incur large computation (time) overheads.
Moreover, existing approaches to convergent key management incur large space
overheads because of the huge number of chunks shared among users. Our
observation that cross-user redundant data are mainly duplicate files
motivates us to propose an efficient secure deduplication scheme, SecDep. SecDep
employs User-Aware Convergent Encryption (UACE) and Multi-Level Key management
(MLK) approaches. (1) UACE combines cross-user file-level and inside-user
chunk-level deduplication, and exploits different secure policies among and
inside users to minimize the computation overheads. Specifically, both of
file-level and chunk-level deduplication use variants of Convergent Encryption
(CE) to resist brute-force attacks. The major difference is that the file-level
CE keys are generated by using a server-aided method to ensure security of
cross-user deduplication, while the chunk-level keys are generated by using a
user-aided method with lower computation overheads. (2) To reduce key space
overheads, MLK uses file-level key to encrypt chunk-level keys so that the key
space will not increase with the number of sharing users. Furthermore, MLK
splits the file-level keys into share-level keys and distributes them to
multiple key servers to ensure security and reliability of file-level keys. Our
security analysis demonstrates that SecDep ensures data confidentiality and key
security. Our experiment results based on several large real-world datasets
show that SecDep is more time efficient and key-space-efficient than the
state-of-the-art secure deduplication approaches.
IEEE 2015: A
Secure Client-Side Deduplication Scheme in Cloud Storage Environments
Abstract—Recent years have witnessed the trend of leveraging cloud-based services for large-scale content storage, processing, and distribution. Security and privacy are among the top concerns for public cloud environments. Towards these security challenges, we propose and implement, on OpenStack Swift, a new client-side deduplication scheme for securely storing and sharing outsourced data via the public cloud. The originality of our proposal is twofold. First, it ensures better confidentiality towards unauthorized users. That is, every client computes a per-data key to encrypt the data that he intends to store in the cloud. As such, data access is managed by the data owner. Second, by integrating access rights in the metadata file, an authorized user can decipher an encrypted file only with his private key.
IEEE 2015: Adaptive
Algorithm for Minimizing Cloud Task Length with Prediction Errors
Abstract—Compared
to traditional distributed computing such as Grid systems, it is non-trivial to
optimize a cloud task's execution performance due to additional constraints
such as the user's payment budget and divisible resource demand. In this paper,
we analyze in depth our proposed optimal algorithm for minimizing task
execution length with divisible resources and a payment budget: (1) We derive
the upper bound of cloud task length, taking into account both workload
prediction errors and host load prediction errors. With such state-of-the-art
bounds, the worst-case task execution performance is predictable, which can in
turn improve the Quality of Service. (2) We design a dynamic version of the
algorithm to adapt to load dynamics over the task execution progress, further
improving resource utilization. (3) We rigorously build a cloud prototype over
a real cluster environment with
56 virtual machines, and evaluate our algorithm with different levels of
resource contention. Cloud users in our cloud system are able to compose
various tasks based on off-the-shelf web services. Experiments show that task
execution lengths under our algorithm are always close to their theoretical
optimal values, even in a competitive situation with limited available
resources. We also observe a high level of fair treatment on the resource
allocation among all tasks.
An Efficient Information Retrieval Approach for Collaborative Cloud Computing
Abstract—Collaborative cloud computing (CCC), which is collaboratively supported by various organizations (Google, IBM, Amazon, Microsoft), offers a promising future for information retrieval. Human beings tend to keep things simple by moving the complex aspects to computing. As a consequence, we prefer to go to one or a limited number of sources for all our information needs. In the contemporary scenario, where information is replicated, modified (value added), and scattered geographically, retrieving information in a suitable form requires much more effort from the user and is thus difficult. For instance, we would like to go directly to the source of information and at the same time not be burdened with additional effort. This is where we can make use of learning systems (neural-network based) that can intelligently decide and retrieve the information that we need by going directly to the source of information. This also reduces single points of failure, eliminates bottlenecks in the path of information flow, reduces time delay, and provides a remarkable ability to recover from traffic congestion and complicated patterns, making for an efficient information retrieval approach for collaborative cloud computing.
Building
Confidential and Efficient Query Services in the Cloud with RASP Data
Perturbation
Abstract—With
the wide deployment of public cloud computing infrastructures, using clouds to
host data query services has become an appealing solution for the advantages on
scalability and cost-saving. However, some data might be so sensitive that the
data owner does not want to move it to the cloud unless data confidentiality
and query privacy are guaranteed. On the other hand, a secured query service should
still provide efficient query processing and significantly reduce the in-house
workload to fully realize the benefits of cloud computing. We propose the RASP
data perturbation method to provide secure and efficient range query and kNN
query services for protected data in the cloud. The RASP data perturbation
method combines order preserving encryption, dimensionality expansion, random
noise injection, and random projection, to provide strong resilience to attacks
on the perturbed data and queries. It also preserves multidimensional ranges,
which allows existing indexing techniques to be applied to speed up range query
processing. The kNN-R algorithm is designed to work with the RASP range query
algorithm to process the kNN queries. We have carefully analyzed the attacks on
data and queries under a precisely defined threat model and realistic security
assumptions. Extensive experiments have been conducted to show the advantages
of this approach on efficiency and security.
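A simplified flavor of the perturbation above: each record is extended with a
homogeneous coordinate and a random-noise dimension, then multiplied by a
secret invertible matrix, so the owner can invert the transform while the
server sees only perturbed points. This sketch omits the scheme's
order-preserving-encryption component and the query transformation; the
dimensions are illustrative:

import numpy as np

rng = np.random.default_rng(7)
d = 2
A = rng.standard_normal((d + 2, d + 2))        # secret invertible matrix
while abs(np.linalg.det(A)) < 1e-6:
    A = rng.standard_normal((d + 2, d + 2))

def perturb(x):
    ext = np.concatenate([x, [1.0], [rng.standard_normal()]])
    return A @ ext                             # y = A [x; 1; noise]

def recover(y):
    return (np.linalg.inv(A) @ y)[:d]          # drop homogeneous + noise dims

x = np.array([3.0, 5.0])
y = perturb(x)
assert np.allclose(recover(y), x)
print(y)      # what the cloud stores; x is not directly visible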
Compatibility-aware
Cloud Service Composition under Fuzzy Preferences of Users
Abstract—When
a single Cloud service (i.e., a software image and a virtual machine), on its
own, cannot satisfy all the user requirements, a composition of Cloud services
is required. Cloud service composition, which includes several tasks such as discovery,
compatibility checking, selection, and deployment, is a complex process and
users find it difficult to select the best one among the hundreds, if not
thousands, of possible compositions available. Service composition in Cloud
raises new challenges caused by the diversity of users with different
expertise requiring their applications to be deployed across different
geographical locations with distinct legal constraints. The main difficulty lies in
selecting a combination of virtual appliances (software images) and
infrastructure services that are compatible and satisfy a user with vague
preferences. Therefore, we present
a framework and algorithms which simplify Cloud service composition for
unskilled users. We develop an ontology-based approach to analyze Cloud service
compatibility by applying reasoning to the expert knowledge. In addition, to
minimize the effort of users in expressing their preferences, we apply a
combination of evolutionary algorithms and fuzzy logic for composition
optimization. This lets users express their needs in linguistic terms, which
brings great comfort to them compared to systems that force users to assign
exact weights for all preferences.
Consistency
as a Service: Auditing Cloud Consistency
Abstract—Cloud storage services have become commercially popular due to their overwhelming advantages. To provide ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas of each piece of data on geographically distributed servers. A key problem of using the replication technique in clouds is that it is very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not. We propose a two-level auditing architecture, which only requires a loosely synchronized clock in the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the commonality of violations, and the staleness of the value of a read. Finally, we devise a heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were performed using a combination of simulations and real cloud deployments to validate HAS.
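The staleness metric above can be illustrated with a small audit routine: given
loosely timestamped operations collected from the audit cloud, flag reads that
return a value older than a write that finished before the read began. The log
format below is invented for this sketch:

def staleness_violations(ops):
    """ops: list of (kind, key, value, start_ts, end_ts), kind in {R, W}."""
    violations = []
    for r in (o for o in ops if o[0] == "R"):
        prior_writes = [w for w in ops
                        if w[0] == "W" and w[1] == r[1] and w[4] < r[3]]
        if prior_writes:
            latest = max(prior_writes, key=lambda w: w[4])
            if r[2] != latest[2]:              # read returned an older value
                violations.append((r, latest))
    return violations

log = [("W", "k", "v1", 0, 1),
       ("W", "k", "v2", 2, 3),
       ("R", "k", "v1", 5, 6)]        # stale: v2 was committed at t=3
print(staleness_violations(log))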
Data
Similarity-Aware Computation Infrastructure for the Cloud
Abstract—The cloud is emerging as a platform for scalable and efficient
services. To meet the needs of handling massive data and decreasing data
migration, the computation infrastructure requires efficient data placement and
proper management for cached data. In this paper, we propose an efficient and
cost-effective multilevel caching scheme, called MERCURY, as computation
infrastructure of the cloud. The idea behind MERCURY is to explore and exploit
data similarity and support efficient data placement. To accurately and efficiently
capture the data similarity, we leverage a low-complexity locality-sensitive
hashing (LSH). In our design, in addition to the problem of space inefficiency,
we identify that a conventional LSH scheme also suffers from the problem of
homogeneous data placement. To address these two problems, we design a novel
multi core-enabled locality-sensitive hashing (MC-LSH) that accurately captures
the differentiated similarity across data. The similarity-aware MERCURY, hence,
partitions data into the L1 cache, L2 cache, and main memory based on their
distinct localities, which help optimize cache utilization and minimize the
pollution in the last-level cache. Besides extensive evaluation through
simulations, we also implemented MERCURY in a system. Experimental results
based on real-world applications and datasets demonstrate the efficiency and
efficacy of our proposed schemes.
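A minimal random-hyperplane LSH, the basic family that similarity-aware
placement of this kind builds on (the paper's MC-LSH refines it): similar
vectors collide in the same bucket with high probability, so the bucket
identity can drive cache placement. The parameters are illustrative:

import numpy as np

rng = np.random.default_rng(42)

def lsh_signature(x, planes):
    return tuple((planes @ x > 0).astype(int))   # one bit per hyperplane

planes = rng.standard_normal((8, 16))            # 8 hyperplanes, 16-dim data
a = rng.standard_normal(16)
b = a + 0.01 * rng.standard_normal(16)           # near-duplicate of a
c = rng.standard_normal(16)                      # unrelated vector

print(lsh_signature(a, planes) == lsh_signature(b, planes))  # very likely True
print(lsh_signature(a, planes) == lsh_signature(c, planes))  # likely False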
Maximizing
Revenue with Dynamic Cloud Pricing: The Infinite Horizon Case
Abstract—We study the infinite horizon dynamic pricing problem for
an infrastructure cloud provider in the emerging cloud computing paradigm. The
cloud provider, such as Amazon, provides computing capacity in the form of
virtual instances and charges customers a time-varying price for the period
they use the instances. The provider’s problem is then to find an optimal
pricing policy, in face of stochastic demand arrivals and departures, so that
the average expected revenue is maximized in the long run. We adopt a revenue
management framework to tackle the problem. Optimality conditions and
structural results are obtained for our stochastic formulation, which yield
insights on the optimal pricing strategy. Numerical results verify our analysis
and reveal additional properties of optimal pricing policies for the
infinite horizon case.
Enabling
Data Integrity Protection in Regenerating-Coding-Based Cloud Storage
Abstract—To protect outsourced data in cloud storage against corruption, enabling integrity protection, fault tolerance, and efficient recovery for cloud storage becomes critical. Regenerating codes provide fault tolerance by striping data across multiple servers, while using less repair traffic than traditional erasure codes during failure recovery. Therefore, we study the problem of remotely checking the integrity of regenerating-coded data against corruption under a real-life cloud storage setting. We
design and implement a practical data integrity protection (DIP) scheme
for a specific regenerating code, while preserving the intrinsic properties of
fault tolerance and repair traffic saving. Our DIP scheme is designed under a
Byzantine adversarial model, and enables a client to feasibly verify the
integrity of random subsets of outsourced data against general or malicious corruptions.
It works under the simple assumption of thin-cloud storage and allows different
parameters to be fine-tuned for the performance-security trade-off. We
implement and evaluate the overhead of our DIP scheme in a real cloud storage
test bed under different parameter choices. We demonstrate that remote integrity
checking can be feasibly integrated into regenerating codes in practical
deployment.
Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage
Abstract—Data sharing is an important functionality in cloud
storage. In this article, we show how to securely, efficiently, and flexibly
share data with others in cloud storage. We describe new public-key
cryptosystems which produce constant-size ciphertexts such that efficient
delegation of decryption rights for any set of ciphertexts is possible. The
novelty is that one can aggregate any set of secret keys and make them as
compact as a single key, but encompassing the power of all the keys being
aggregated. In other words, the secret key holder can release a constant-size
aggregate key for flexible choices of ciphertext set in cloud storage, but the
other encrypted files outside the set remain confidential. This compact
aggregate key can be conveniently sent to others or stored in a smart card
with very limited secure storage. We provide formal security analysis of our
schemes in the standard model. We also describe other applications of our
schemes. In particular, our schemes give the first public-key
patient-controlled encryption for flexible hierarchy, which was yet to be
known.
Low-Carbon
Routing Algorithms for Cloud Computing Services in IP-over-WDM Networks
Abstract—Energy consumption in telecommunication networks keeps
growing rapidly, mainly due to emergence of new Cloud Computing (CC) services
that need to be supported by large data centers that consume a huge amount of
energy and, in turn, cause the emission of enormous quantity of CO2. Given the
decreasing availability of fossil fuels and the rising concern about global
warming, research is now focusing on novel “low-carbon” telecom solutions. For
example, based on today's telecom technologies, data centers can be located
near renewable
via reconfigurable optical networks, based on the principle that data can be
moved more efficiently than electricity. This paper focuses on how to dynamically
route on-demand optical circuits that are established to transfer
energy-intensive data processing towards data centers powered with renewable
energy. Our main contribution consists in devising two routing algorithms for
connections supporting CC services, aimed at minimizing the CO2 emissions of
data centers by following the current availability of renewable energy (Sun and
Wind). The trade-off with energy consumption for the transport equipment is
also considered. The results show that relevant reductions, up to about 30% in
CO2 emissions, can be achieved using our approaches compared to baseline
shortest-path-based routing strategies, at the cost of only a marginal increase
in network blocking probability.
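A toy rendering of "low-carbon" route selection: Dijkstra over a graph whose
link weights reflect the current CO2 intensity of the equipment and destination
they feed, so paths toward renewably powered data centers come out cheapest.
The topology and weights are invented:

import heapq

def low_carbon_path(graph, src, dst):
    """graph[u] = [(v, co2_cost), ...]; returns (total_co2, path)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph[u]:
            if v not in seen:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return float("inf"), []

graph = {                                # solar DC reachable via "b" is cheap
    "src": [("a", 5.0), ("b", 1.0)],
    "a":   [("dc1", 1.0)],
    "b":   [("dc_solar", 0.2)],
    "dc1": [], "dc_solar": [],
}
print(low_carbon_path(graph, "src", "dc_solar"))   # (1.2, ['src', 'b', 'dc_solar'])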
NCCloud:
A Network-Coding-Based Storage System in a Cloud-of-Clouds
Abstract—To provide fault tolerance for cloud storage, recent
studies propose to stripe data across multiple cloud vendors. However, if a
cloud suffers from a permanent failure and loses all its data, we need to
repair the lost data with the help of the other surviving clouds to preserve
data redundancy. We present a proxy-based storage system for fault-tolerant
multiple-cloud storage called NCCloud, which achieves cost-effective repair for
a permanent single-cloud failure. NCCloud is built on top of a
network-coding-based storage scheme called the functional minimum-storage
regenerating (FMSR) codes, which maintain the same fault tolerance and data
redundancy as in traditional erasure codes (e.g., RAID-6), but use less repair
traffic and hence incur less monetary cost due to data transfer. One key design
feature of our FMSR codes is that we relax the encoding requirement of storage
nodes during repair, while preserving the benefits of network coding in repair.
We implement a proof-of-concept prototype of NCCloud and deploy it atop both
local and commercial clouds. We validate that FMSR codes provide significant
monetary cost savings in repair over RAID-6 codes, while having comparable
response time performance in normal cloud storage operations such as upload/download.
Integrity
Verification in Multi-Cloud Storage Using Cooperative Provable Data Possession
Abstract- Storage outsourcing in cloud computing is a rising trend
which prompts a number of interesting security issues. Provable data possession
(PDP) is a method for ensuring the integrity of data in storage outsourcing.
This research addresses the construction of efficient PDP which called as Cooperative
PDP (CPDP) mechanism for distributed cloud storage to support data migration
and scalability of service, which considers the existence of multiple cloud
service providers to collaboratively store and maintain the clients’ data. Cooperative
PDP (CPDP) mechanism is based on homomorphic verifiable response, hash index
hierarchy for dynamic scalability, cryptographic encryption for security.
Moreover, it proves the security of scheme based on multi-prover zero knowledge
proof system, which can satisfy knowledge soundness, completeness, and
zero-knowledge properties. This research introduces lower computation and
communication overheads in comparison with non-cooperative approaches.
Optimal
Power Allocation and Load Distribution for Multiple Heterogeneous Multi core
Server Processors across Clouds and Data Centers
Abstract—For multiple heterogeneous multi core server processors
across clouds and data centers, the aggregated performance of the cloud of
clouds can be optimized by load distribution and balancing. Energy efficiency
is one of the most important issues for large scale server systems in current
and future data centers. The multi core processor technology provides new
levels of performance and energy efficiency. The present paper aims to develop
power and performance constrained load distribution methods for cloud computing
in current and future large-scale data centers. In particular, we address the
problem of optimal power allocation and load distribution for multiple
heterogeneous multi core server processors across clouds and data centers. Our
strategy is to formulate optimal power allocation and load distribution for
multiple servers in a cloud of clouds as optimization problems, i.e., power
constrained performance optimization and performance constrained power
optimization. Our research problems in large-scale data centers are well-defined
multivariable optimization problems, which explore the power-performance
tradeoff by fixing one factor and minimizing the other, from the perspective of
optimal load distribution. It is clear that such power and performance optimization
is important for a cloud computing provider to efficiently utilize all the
available resources. We model a multi core server processor as a queuing system
with multiple servers. Our optimization problems are solved for two different
models of core speed, where one model assumes that a core runs at zero speed
when it is idle, and the other model assumes that a core runs at a constant
speed. Our results in this paper provide new theoretical insights into power
management and performance optimization in data centers.
Oruta:
Privacy-Preserving Public Auditing for Shared Data in the Cloud
Abstract—With cloud storage services, it is commonplace for data
to be not only stored in the cloud, but also shared across multiple users.
However, public auditing for such shared data — while preserving identity
privacy — remains an open challenge. In this paper, we propose the first
privacy-preserving mechanism that allows public auditing on shared data stored
in the cloud. In particular, we exploit ring signatures to compute the
verification information needed to audit the integrity of shared data. With our
mechanism, the identity of the signer on each block in shared data is kept
private from a third party auditor (TPA), who is still able to verify the integrity
of shared data without retrieving the entire file. Our experimental results demonstrate
the effectiveness and efficiency of our proposed mechanism when auditing shared
data.
Towards
Differential Query Services in Cost-Efficient Clouds
Abstract—Cloud computing as an emerging technology trend is
expected to reshape the advances in information technology. In a cost efficient
cloud environment, a user can tolerate a certain degree of delay while
retrieving information from the cloud to reduce costs. In this paper, we
address two fundamental issues in such an environment: privacy and efficiency.
We first review a private keyword-based file retrieval scheme originally
proposed by Ostrovsky, which allows a user to retrieve files of interest
from an untrusted server without leaking any information. Its main drawback
is the heavy querying overhead it incurs on the cloud, which goes against
the original intention of cost efficiency. In this paper,
we present a scheme, termed efficient information retrieval for ranked query
(EIRQ), based on an aggregation and distribution layer (ADL), to reduce
querying overhead incurred on the cloud. In EIRQ, queries are classified into
multiple ranks, where a higher ranked query can retrieve a higher percentage of
matched files. A user can retrieve files on demand by choosing queries of
different ranks. This feature is useful when there are a large number of
matched files, but the user only needs a small subset of them. Under different
parameter settings, extensive evaluations have been conducted on both
analytical models and on a real cloud environment, in order to examine the
effectiveness of our schemes.
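A minimal sketch of the ranking idea, assuming hypothetical rank-to-percentage values: a query's rank caps the fraction of its matched files that are returned. The ADL aggregation and all privacy machinery are omitted.

```python
# Hypothetical mapping: rank -> share of matched files returned.
RANK_PERCENT = {0: 1.00, 1: 0.75, 2: 0.50, 3: 0.25}

def retrieve(keyword: str, rank: int, index: dict[str, list[str]]) -> list[str]:
    """Return only the rank-determined fraction of matched files."""
    matches = index.get(keyword, [])
    if not matches:
        return []
    k = max(1, int(len(matches) * RANK_PERCENT[rank]))
    return matches[:k]

index = {"cloud": [f"doc{i}" for i in range(8)]}
print(retrieve("cloud", 0, index))  # rank 0: all 8 matched files
print(retrieve("cloud", 2, index))  # rank 2: only 4 of them
```

Lower-ranked (cheaper) queries thus trade completeness for cost, which is the point when a user only needs a small subset of many matches.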
Scalable Distributed Service Integrity Attestation for Software-as-a-Service Clouds
Abstract—Software-as-a-Service
(SaaS) cloud systems enable application service providers to deliver their
applications via massive cloud computing infrastructures. However, due to their
sharing nature, SaaS clouds are vulnerable to malicious attacks. In this paper,
we present IntTest, a scalable and effective service integrity attestation
framework for SaaS clouds. IntTest provides a novel integrated attestation
graph analysis scheme that can provide stronger attacker pinpointing power than
previous schemes. Moreover, IntTest can automatically enhance result quality by
replacing bad results produced by malicious attackers with good results
produced by benign service providers. We have implemented a prototype of the
IntTest system and tested it on a production cloud computing infrastructure
using IBM System S stream processing applications. Our experimental results
show that IntTest can achieve higher attacker pinpointing accuracy than
existing approaches. IntTest does not require any special hardware or secure
kernel support and imposes little performance impact on the application, which
makes it practical for large-scale cloud systems.
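As a toy illustration of attestation-graph pinpointing, the sketch below (with hypothetical providers and results) replays one attested input across several providers of the same service function, draws an inconsistency link between any two providers whose results disagree, and flags the minority side. Note that this toy falls back on majority voting, which is precisely the baseline that IntTest's integrated graph analysis is designed to outperform when colluding attackers can form a local majority.

```python
# Toy attestation sketch, not IntTest's full analysis: replay the same
# input on several providers, link those that disagree, flag the
# minority.
from collections import Counter
from itertools import combinations

results = {  # provider -> result of replaying one attested input
    "p1": 42, "p2": 42, "p3": 7, "p4": 42, "p5": 7,
}

# Inconsistency links between providers with conflicting results.
inconsistent = [(a, b) for a, b in combinations(results, 2)
                if results[a] != results[b]]

# Heuristic pinpointing: trust the majority result, flag the rest.
majority, _ = Counter(results.values()).most_common(1)[0]
suspects = sorted(p for p, r in results.items() if r != majority)

print("inconsistency links:", inconsistent)
print("suspected malicious providers:", suspects)
```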
QoS-Aware Data Replication for Data-Intensive Applications in Cloud Computing Systems
Abstract—Cloud computing provides scalable computing and storage resources. More and more data-intensive applications are being developed in this computing environment. Different applications have different quality-of-service (QoS) requirements. To continuously support the QoS requirement of an application after data corruption, we propose two QoS-aware data replication (QADR) algorithms in cloud computing systems. The first algorithm adopts the intuitive idea of high-QoS first-replication (HQFR) to perform data replication. However, this greedy algorithm cannot minimize the data replication cost and the number of QoS-violated data replicas. To achieve these two minimum objectives, the second algorithm transforms the QADR problem into the well-known minimum-cost maximum-flow (MCMF) problem. By applying the existing MCMF algorithm to solve the QADR problem, the second algorithm can produce the optimal solution to the QADR problem in polynomial time, but it takes more computational time than the first algorithm. Moreover, it is known that a cloud computing system usually has a large number of nodes. We also propose node combination techniques to reduce the possibly large data replication time. Finally, simulation experiments are performed to demonstrate the effectiveness of the proposed algorithms in data replication and recovery.
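The HQFR pass lends itself to a short sketch, assuming hypothetical node latencies, capacities, and QoS bounds: blocks with the strictest QoS requirement are replicated first, each onto the fastest node that still has a free slot and meets the bound. The MCMF-based optimal algorithm is not shown.

```python
# Hedged sketch of the HQFR greedy pass; all values are hypothetical.
blocks = [("b1", 0.2), ("b2", 1.0), ("b3", 0.5)]          # (block, max recovery time)
nodes = [("n1", 0.1, 1), ("n2", 0.4, 2), ("n3", 2.0, 2)]  # (node, access latency, free slots)

placement: dict[str, str] = {}
capacity = {name: slots for name, _, slots in nodes}
latency = {name: lat for name, lat, _ in nodes}

for block, qos in sorted(blocks, key=lambda b: b[1]):  # strictest QoS first
    candidates = [n for n in capacity
                  if capacity[n] > 0 and latency[n] <= qos]
    if candidates:
        target = min(candidates, key=latency.get)  # fastest qualified node
        placement[block] = target
        capacity[target] -= 1

print(placement)  # {'b1': 'n1', 'b3': 'n2', 'b2': 'n2'}
```

Because the greedy pass commits the fastest nodes to the strictest blocks first, it can strand later blocks or inflate cost, which is why the paper's MCMF formulation is needed for an optimal placement.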
Public Auditing for Shared Data with Efficient User Revocation in the Cloud
Abstract—With data services in the cloud, users can easily modify
and share data as a group. To ensure data integrity can be audited publicly,
users need to compute signatures on all the blocks in shared data. Different
blocks are signed by different users due to data modifications performed by
different users. For security reasons, once a user is revoked from the group,
the blocks, which were previously signed by this revoked user, must be
re-signed by an existing user. The straightforward method, which allows an
existing user to download the corresponding part of shared data and re-sign it
during user revocation, is inefficient due to the large size of shared data in the cloud. In this
paper, we propose a novel public auditing mechanism for the integrity of shared
data with efficient user revocation in mind. By utilizing proxy re-signatures,
we allow the cloud to re-sign blocks on behalf of existing users during user
revocation, so that existing users do not need to download and re-sign blocks
by themselves. In addition, a public verifier is always able to audit the
integrity of shared data without retrieving the entire data from the cloud, even
if some part of shared data has been re-signed by the cloud. Experimental
results show that our mechanism can significantly improve the efficiency of
user revocation.
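The proxy re-signature step can be illustrated with schoolbook discrete-log arithmetic; this is a toy with small, insecure parameters, not the pairing-based construction such schemes typically build on. The cloud holds only a re-signing key and converts the revoked user's signature on a block into one that verifies under an existing user's public key, without re-downloading the block. In a real scheme the re-signing key is established without either user revealing a secret key; the toy computes it directly for brevity.

```python
# Toy proxy re-signature over a prime-order group (insecure parameters,
# illustration only): rk = sk_new / sk_old lets the cloud turn
# g^{H(m)*sk_old} into g^{H(m)*sk_new} without seeing the block again.
import hashlib

P = 2**127 - 1  # toy Mersenne prime modulus (far too small for real use)
G = 3           # toy group generator

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % (P - 1)

def sign(sk: int, block: bytes) -> int:
    return pow(G, h(block) * sk, P)        # sigma = g^{H(m)*sk}

def verify(pk: int, block: bytes, sig: int) -> bool:
    return sig == pow(pk, h(block), P)     # pk^{H(m)} = g^{sk*H(m)}

sk_old, sk_new = 123457, 987643
pk_old, pk_new = pow(G, sk_old, P), pow(G, sk_new, P)

block = b"shared data block 17"
sig_old = sign(sk_old, block)
assert verify(pk_old, block, sig_old)

# Re-signing key given to the cloud (computed directly here; a real
# scheme derives it without exposing either secret key).
rk = (sk_new * pow(sk_old, -1, P - 1)) % (P - 1)
sig_new = pow(sig_old, rk, P)              # cloud re-signs the block
assert verify(pk_new, block, sig_new)      # verifies under the existing user
print("re-signed block verifies under the existing user's key")
```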