IEEE 2021: A Flexible Access Control with User Revocation in Fog-Enabled Cloud Computing Abstract - A major challenge in the fog-enabled cloud computing paradigm is ensuring the security of data accessed through cloud and fog nodes. To address this challenge, a Flexible Access Control using Elliptic Curve Cryptography (FAC-ECC) protocol has been developed in which user data are encrypted by multiple asymmetric keys. Such keys are handled by both users and fog nodes. Also, data access is controlled by encrypting the data through the user. However, the main problem is guaranteeing the privacy and security of resources after User Revocation (UR) is processed by data owners. The issue of UR needs to be considered to satisfy the dynamic change of user access in applications like healthcare systems, e-commerce, etc. Therefore, in this article, a FAC-UR-ECC protocol is proposed to control data access and realize UR in fog-enabled cloud systems. In this protocol, a revocable key-aggregate cryptosystem is applied in the fog-cloud paradigm. It is an extension of the key-aggregate cryptosystem in which a user is revoked once his/her credential expires. First, the subset-cover model is combined with the FAC-ECC protocol to design an efficient revocable key-aggregate encryption scheme based on multilinear maps, which realizes the user's access control and revocation. It simplifies the user's key management and delegates decryption permission to various clients. It also accomplishes revocation of user access privileges and flexible access control efficiently. Under this protocol, both the user's secret key and the ciphertext are kept at a fixed size. The security of data access is greatly enhanced by having data owners update the ciphertext. Finally, experimental results exhibit the efficiency of FAC-UR-ECC compared to the FAC-ECC protocol.
IEEE 2021: Identity-Based Privacy Preserving Remote Data Integrity Checking for Cloud Storage Abstract: Although cloud storage services enable people to easily maintain and manage large amounts of data at lower cost, they cannot ensure the integrity of people's data. In order to audit the correctness of the data without downloading them, many remote data integrity checking (RDIC) schemes have been presented. Most existing schemes ignore the important issue of data privacy preservation and suffer from the complicated certificate management that comes with public key infrastructure. To overcome these shortcomings, this article proposes a new identity-based RDIC scheme that makes use of homomorphic verifiable tags to decrease the system complexity. The original data in the proof are masked by random integer addition, which prevents the verifier from obtaining any knowledge about the data during the integrity checking process. Our scheme is proved secure under the assumption of the computational Diffie–Hellman problem. Experimental results show that our scheme is efficient and feasible for real-life applications.
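A minimal numeric sketch of the random-masking idea described above (illustrative only; the names and modulus are our own assumptions, and the real scheme binds the randomness to homomorphic verifiable tags rather than sending it in the clear):

import secrets

P = (1 << 127) - 1  # prime modulus for the toy arithmetic (an assumption)

def prove(blocks, challenge):
    # The prover blinds mu = sum(c_i * m_i) with a random integer r,
    # so the verifier learns nothing about the blocks themselves.
    mu = sum(c * m for c, m in zip(challenge, blocks)) % P
    r = secrets.randbelow(P)  # fresh randomness per proof
    return (mu + r) % P, r

blocks = [1234, 5678, 9012]                     # toy outsourced data blocks
challenge = [secrets.randbelow(P) for _ in blocks]
masked_mu, r = prove(blocks, challenge)
# The verifier never sees the unmasked combination:
assert (masked_mu - r) % P == sum(c * m for c, m in zip(challenge, blocks)) % P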
IEEE 2021: Privacy-Preserving Data Encryption Strategy for Big Data in Mobile Cloud Computing Abstract: Privacy has become a considerable issue as applications of big data grow dramatically in cloud computing. The implementation of these emerging technologies has improved or changed service models and improved application performance in various respects. However, the remarkably growing volume of data has also resulted in many practical challenges. The execution time of data encryption is one of the serious issues during data processing and transmission. Many current applications abandon data encryption in order to reach an acceptable performance level, at the cost of privacy. In this paper, we concentrate on privacy and propose a novel data encryption approach called the Dynamic Data Encryption Strategy (D2ES). Our approach selectively encrypts data using privacy classification methods under timing constraints. It is designed to maximize the scope of privacy protection by applying a selective encryption strategy within the required execution time. The performance of D2ES has been evaluated in our experiments, which provide proof of the privacy enhancement.
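As a hedged illustration of the selective-encryption idea (a toy greedy policy of our own devising, not the paper's D2ES algorithm): rank data items by privacy weight and encrypt the most sensitive ones first until the time budget runs out.

def select_for_encryption(items, time_budget):
    # items: list of (name, privacy_weight, estimated_encrypt_seconds)
    plan, spent = [], 0.0
    for name, weight, cost in sorted(items, key=lambda x: -x[1]):
        if spent + cost <= time_budget:
            plan.append(name)
            spent += cost
    return plan  # names of items to encrypt within the budget

items = [("medical_record", 0.9, 2.0), ("zip_code", 0.3, 0.2),
         ("browsing_log", 0.6, 1.5), ("public_post", 0.1, 0.5)]
print(select_for_encryption(items, time_budget=3.0))
# -> ['medical_record', 'zip_code', 'public_post'] under this toy weighting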
IEEE 2021: CLASS: Cloud Log Assuring Soundness and Secrecy Scheme for Cloud Forensics Abstract - User activity logs can be a valuable source of information in cloud forensic investigations; hence, ensuring the reliability and integrity of such logs is crucial. Most existing solutions for secure logging are designed for conventional systems rather than the complexity of a cloud environment. In this paper, we propose the Cloud Log Assuring Soundness and Secrecy (CLASS) process as an alternative scheme for the securing of logs in a cloud environment. In CLASS, logs are encrypted using the individual user's public key so that only the user is able to decrypt the content. In order to prevent unauthorized modification of the log, we generate proof of past log (PPL) using Rabin's fingerprint and Bloom filter. Such an approach reduces verification time significantly. Findings from our experiments deploying CLASS in OpenStack demonstrate the utility of CLASS in a real-world context.
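A minimal sketch of the proof-of-past-log (PPL) idea: fingerprint each log entry and record it in a Bloom filter so verification is a fast membership check. The fingerprint below is a stand-in digest, not Rabin's actual fingerprinting scheme, and the parameters are illustrative.

import hashlib

class BloomFilter:
    def __init__(self, size=1 << 16, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)
    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def maybe_contains(self, item):
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

def fingerprint(entry):
    # Stand-in for Rabin's fingerprint; any short, fast digest works here.
    return hashlib.blake2b(entry.encode(), digest_size=8).digest()

ppl = BloomFilter()
ppl.add(fingerprint("2021-03-01 10:02 user=alice action=login"))
assert ppl.maybe_contains(fingerprint("2021-03-01 10:02 user=alice action=login"))
assert not ppl.maybe_contains(fingerprint("tampered entry"))  # true w.h.p.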
IEEE 2021: Crypt-DAC: Cryptographically Enforced Dynamic Access Control in the Cloud Abstract: Enabling cryptographically enforced access control for data hosted in an untrusted cloud is attractive for many users and organizations. However, designing an efficient cryptographically enforced dynamic access control system in the cloud remains challenging. In this paper, we propose Crypt-DAC, a system that provides practical cryptographic enforcement of dynamic access control. Crypt-DAC revokes access permissions by delegating the cloud to update encrypted data. In Crypt-DAC, a file is encrypted by a symmetric key list which records a file key and a sequence of revocation keys. In each revocation, a dedicated administrator uploads a new revocation key to the cloud and requests it to encrypt the file with a new layer of encryption and update the encrypted key list accordingly. Crypt-DAC proposes three key techniques to bound the size of the key list and the number of encryption layers. As a result, Crypt-DAC enforces dynamic access control with both efficiency, as it avoids expensive decryption/re-encryption and uploading/re-uploading of large data at the administrator side, and security, as it immediately revokes access permissions. We use a formalization framework and a system implementation to demonstrate the security and efficiency of our construction.
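The layered-revocation idea can be sketched with ordinary symmetric encryption (a simplified illustration under our own assumptions, not Crypt-DAC's full protocol; in the real system the cloud, not the client, applies the new layer):

from cryptography.fernet import Fernet  # pip install cryptography

key_list = [Fernet.generate_key()]          # position 0: the file key
ciphertext = Fernet(key_list[0]).encrypt(b"project plan contents")

def revoke():
    # Each revocation wraps the ciphertext in one more layer under a
    # fresh revocation key, so holders of only old keys are locked out.
    global ciphertext
    new_key = Fernet.generate_key()
    key_list.append(new_key)
    ciphertext = Fernet(new_key).encrypt(ciphertext)

def decrypt():
    # Authorized users peel the layers in reverse key order.
    data = ciphertext
    for key in reversed(key_list):
        data = Fernet(key).decrypt(data)
    return data

revoke()                                    # one revocation event
assert decrypt() == b"project plan contents"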
Abstract: Cloud computing has become prevalent due to its massive storage and vast computing capabilities. Ensuring secure data sharing is critical to cloud applications. Recently, a number of identity-based broadcast proxy re-encryption (IB-BPRE) schemes have been proposed to resolve the problem. However, IB-BPRE requires a cloud user (Alice) who wants to share data with a group of other users (e.g., colleagues) to participate in the group shared key renewal process, because Alice's private key is a prerequisite for shared key generation. This, however, does not leverage the benefits of cloud computing and causes inconvenience for cloud users. Therefore, a novel security notion named revocable identity-based broadcast proxy re-encryption (RIB-BPRE) is presented to address the issue of key revocation in this work. In a RIB-BPRE scheme, a proxy can revoke a set of delegates, designated by the delegator, from the re-encryption key. The performance evaluation reveals that the proposed scheme is efficient and practical.
IEEE 2021: Towards Deadline Guaranteed Cloud Storage Services Abstract: More and more organizations move their data and workload to commercial cloud storage systems. However, the multiplexing and sharing of the resources in a cloud storage system present unpredictable data access latency to tenants, which may make online data-intensive applications unable to satisfy their deadline requirements. Thus, it is important for cloud storage systems to provide deadline-guaranteed services. In this paper, to meet a current form of service level objective (SLO) that constrains the percentage of each tenant's data access requests failing to meet its required deadline below a given threshold, we build a mathematical model to derive the upper bound of the acceptable request arrival rate on each server. We then propose a Deadline Guaranteed storage service (called DGCloud) that incorporates three basic algorithms. Its deadline-aware load balancing scheme redirects requests and creates replicas to release the excess load of each server beyond the derived upper bound. Its workload consolidation algorithm reduces the number of servers as much as possible while still satisfying the SLO, maximizing resource utilization. Its data placement optimization algorithm re-schedules the data placement to minimize the transmission cost of data replication. We further propose three enhancement methods to improve the performance of DGCloud. A dynamic load balancing method allows an overloaded server to quickly offload its excess workload. A data request queue improvement method sets different priorities for the data responses in a server's queue so that more requests can satisfy the SLO requirement. A wakeup server selection method selects a sleeping server that stores more popular data to wake up, which allows it to handle more data requests. Our trace-driven experiments in simulation and on Amazon EC2 show the superior performance of DGCloud compared with previous methods in terms of deadline guarantees and system resource utilization, and the effectiveness of its individual algorithms.
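To make the "upper bound of the acceptable request arrival rate" concrete, here is a hedged illustration under a simple M/M/1 queueing assumption (not necessarily the paper's model): response time T satisfies P(T > d) = exp(-(mu - lambda) d), so keeping the deadline-miss fraction below eps gives lambda <= mu + ln(eps)/d.

import math

def max_arrival_rate(mu, deadline, eps):
    # Largest arrival rate keeping P(T > deadline) <= eps in an M/M/1 queue.
    return max(mu + math.log(eps) / deadline, 0.0)  # log(eps) < 0

# A server serving 100 req/s, a 50 ms deadline, and at most 5% misses:
print(max_arrival_rate(mu=100.0, deadline=0.05, eps=0.05))  # about 40.1 req/s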
Abstract: With the rapid development of cloud computing services, more and more individuals and enterprises prefer to outsource their data or computing to clouds. In order to preserve data privacy, the data should be encrypted before outsourcing, and it is a challenge to perform searches over encrypted data. In this paper, we propose a privacy-preserving multi-keyword ranked search scheme over encrypted data in hybrid clouds, denoted MRSE-HC. The keyword dictionary of documents is clustered into balanced partitions by a bisecting k-means clustering based keyword partition algorithm. According to the partitions, keyword partition based bit vectors are adopted for documents and queries, which are utilized as the index for searches. The private cloud filters out the candidate documents using the keyword partition based bit vectors, and then the public cloud uses the trapdoor to determine the result within the candidates.
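A small sketch of the keyword-partition bit vectors used for filtering (a trivial round-robin partition stands in here for the paper's bisecting k-means clustering, and all names are illustrative):

dictionary = ["cloud", "encrypt", "index", "privacy", "rank", "search"]
PARTITIONS = 3
part_of = {w: i % PARTITIONS for i, w in enumerate(dictionary)}

def bit_vector(keywords):
    # One bit per partition: set if any keyword falls in that partition.
    bits = 0
    for w in keywords:
        bits |= 1 << part_of[w]
    return bits

docs = {"d1": ["cloud", "rank"], "d2": ["privacy"], "d3": ["index", "search"]}
index = {doc: bit_vector(ws) for doc, ws in docs.items()}

query = bit_vector(["cloud", "privacy"])
# Private-cloud filtering: keep documents whose vector overlaps the query's;
# the public cloud would then rank only these candidates via the trapdoor.
print([doc for doc, bits in index.items() if bits & query])  # ['d1', 'd2']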
Abstract: Mobile health (mHealth) has emerged as a new patient-centric model which allows real-time collection of patient data via wearable sensors, aggregation and encryption of these data at mobile devices, and then uploading of the encrypted data to the cloud for storage and access by healthcare staff and researchers. However, efficient and scalable sharing of encrypted data has been a very challenging problem. In this paper, we propose a Lightweight Sharable and Traceable (LiST) secure mobile health system in which patient data are encrypted end-to-end from a patient's mobile device to data users.

Abstract: Vehicular cloud computing (VCC) is composed of multiple distributed vehicular clouds (VCs), which are formed on-the-fly by dynamically integrating underutilized vehicular resources including computing power, storage, and so on. Existing proposals for identity-as-a-service (IDaaS) are not suitable for use in VCC due to the limited computing resources and storage capacity of onboard vehicle devices. In this paper, we first propose an improved ciphertext-policy attribute-based encryption (CP-ABE) scheme. Utilizing the improved CP-ABE scheme and permissioned blockchain technology, we propose a lightweight and privacy-preserving IDaaS architecture for VCC named IDaaSoVCC.
Abstract: Frequent itemset mining, the essential operation in association rule mining, is one of the most widely used data mining techniques on massive datasets nowadays. With the dramatic increase in the scale of datasets collected and stored with cloud services in recent years, it is promising to carry out this computation-intensive mining process in the cloud. A body of work has also transformed approximate mining computation into exact computation, where such methods not only improve accuracy but also aim to enhance efficiency. However, mining data stored on public clouds inevitably introduces privacy concerns on sensitive datasets.
Abstract: High availability is one of the core properties of Infrastructure as a Service (IaaS) and ensures that users have anytime access to on-demand cloud services. However, significant variations in workload and the presence of super-tasks mean that heterogeneous workloads can severely impact the availability of IaaS clouds. Although previous work has investigated global queues, VM deployment, and failure of PMs, two aspects are yet to be fully explored: one is the impact of task size, and the other is the differing features across PMs, such as variable execution rate and capacity. To address these challenges, we propose an attribute-based availability model of large-scale IaaS developed in the formal modeling language CARMA. The size of tasks in our model can be a fixed integer value or follow a normal, uniform, or log-normal distribution.

IEEE 2018: An Efficient and Privacy-Preserving Biometric Identification Scheme in Cloud Computing
ABSTRACT: Biometric identification has become increasingly popular in recent years. With the development of cloud computing, database owners are motivated to outsource their large volumes of biometric data and identification tasks to the cloud to get rid of expensive storage and computation costs, which, however, brings potential threats to users' privacy. In this paper, we propose an efficient and privacy-preserving biometric identification outsourcing scheme. Specifically, the biometric data are encrypted and outsourced to the cloud. To execute a biometric identification, the database owner encrypts the query data and submits it to the cloud. The cloud performs identification operations over the encrypted database and returns the result to the database owner. A thorough security analysis indicates that the proposed scheme is secure even if attackers can forge identification requests and collude with the cloud. Compared with previous protocols, experimental results show that the proposed scheme achieves better performance in both the preparation and identification procedures.
IEEE 2018: Secure Attribute-Based Signature Scheme With Multiple Authorities for Blockchain in Electronic Health Records Systems
ABSTRACT: Electronic Health Records (EHRs) are entirely controlled by hospitals instead of patients, which complicates seeking medical advice from different hospitals. Patients face a critical need to focus on the details of their own healthcare and regain management of their own medical data. The rapid development of blockchain technology promotes population healthcare, including medical records as well as patient-related data. This technology provides patients with comprehensive, immutable records and access to EHRs free from service providers and treatment websites. In this paper, to guarantee the validity of EHRs encapsulated in the blockchain, we present an attribute-based signature scheme with multiple authorities, in which a patient endorses a message according to an attribute while disclosing no information other than the evidence that he has attested to it. Furthermore, there are multiple authorities, without a trusted single or central one, to generate and distribute the patient's public/private keys, which avoids the escrow problem and conforms to the mode of distributed data storage in the blockchain. By sharing secret pseudorandom function seeds among the authorities, this protocol resists collusion attacks by up to N−1 corrupted authorities out of N. Under the computational bilinear Diffie–Hellman assumption, we also formally demonstrate that, in terms of unforgeability and perfect privacy of the attribute signer, this attribute-based signature scheme is secure in the random oracle model. A comparison shows the efficiency and properties of the proposed method relative to previously proposed methods.
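The N−1 collusion resistance from shared seeds can be illustrated with additive secret sharing (a minimal sketch under our own parameter choices, not the paper's exact construction):

import secrets

P = (1 << 61) - 1  # prime modulus for the share arithmetic (an assumption)

def share_seed(seed, n):
    # Split the seed into n additive shares; all n are needed to rebuild it.
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((seed - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

seed = secrets.randbelow(P)
shares = share_seed(seed, n=5)
assert reconstruct(shares) == seed
assert reconstruct(shares[:4]) != seed  # any 4 of 5 shares reveal nothing (w.h.p.)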
IEEE 2018: DROPS: Division and Replication of Data in Cloud for Optimal Performance and Security
ABSTRACT: Outsourcing data to third-party administrative control, as is done in cloud computing, gives rise to security concerns. Data may be compromised due to attacks by other users and nodes within the cloud. Therefore, strong security measures are required to protect data within the cloud. However, the employed security strategy must also take into account the optimization of the data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which collectively addresses the security and performance issues. In the DROPS methodology, we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each of the nodes stores only a single fragment of a particular data file, which ensures that even in the case of a successful attack, no meaningful information is revealed to the attacker. Moreover, the nodes storing the fragments are separated by a certain distance by means of graph T-coloring to prevent an attacker from guessing the locations of the fragments. Furthermore, the DROPS methodology does not rely on traditional cryptographic techniques for data security, thereby relieving the system of computationally expensive methodologies. We show that the probability of locating and compromising all of the nodes storing the fragments of a single file is extremely low. We also compare the performance of the DROPS methodology with ten other schemes. A higher level of security with only a slight performance overhead was observed.
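A toy sketch of the placement constraint (a plain minimum-hop separation stands in for the paper's graph T-coloring; the topology and threshold are illustrative):

from collections import deque

def hops(graph, src):
    # BFS hop distance from src to every reachable node.
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def place_fragments(graph, n_fragments, min_sep):
    # Greedily pick nodes pairwise at least min_sep hops apart.
    chosen = []
    for node in graph:
        if all(hops(graph, node).get(c, 10**9) >= min_sep for c in chosen):
            chosen.append(node)
        if len(chosen) == n_fragments:
            return chosen
    raise ValueError("topology too small for this separation")

g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}  # a toy path
print(place_fragments(g, n_fragments=3, min_sep=2))  # -> [0, 2, 4]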
ABSTRACT: Cloud computing is a very useful solution for many individual users and organizations. It can provide many services based on different needs and requirements. However, there are many issues related to user data that need to be addressed when using cloud computing. Among the most important issues are data ownership, data privacy, and storage. Users might be satisfied by the services provided by cloud computing service providers, since they need not worry about the maintenance and storage of their data. On the other hand, they might be worried about unauthorized access to their private data. Some solutions to these issues have been proposed in the literature, but they mainly increase cost and processing time since they depend on encrypting all the data. In this paper, we introduce a cloud computing framework that classifies data based on importance: more important data are encrypted with a more secure encryption algorithm and larger key sizes, while less important data might not be encrypted at all. This approach is very helpful in reducing the processing cost and the complexity of data storage and manipulation, since we do not need to apply the same sophisticated encryption techniques to all of the users' data. The results of applying the proposed framework show improvement and efficiency over other existing frameworks.
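A hypothetical sketch of the classification idea: map each importance class to an encryption choice so only critical data pays for strong encryption. The class names and policy table are our own, and AES-GCM via the cryptography package stands in for whichever algorithms the framework would select.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

POLICY = {
    "critical": 32,    # AES-256-GCM
    "normal":   16,    # AES-128-GCM
    "public":   None,  # stored in plaintext
}

def protect(data, importance):
    key_len = POLICY[importance]
    if key_len is None:
        return data, None, None             # no encryption for public data
    key = AESGCM.generate_key(bit_length=key_len * 8)
    nonce = os.urandom(12)
    return AESGCM(key).encrypt(nonce, data, None), key, nonce

blob, key, nonce = protect(b"salary table", "critical")
assert AESGCM(key).decrypt(nonce, blob, None) == b"salary table"
plain, _, _ = protect(b"press release", "public")  # left unencrypted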
IEEE 2018: Privacy Preserving Ranked Multi-Keyword Search for Multiple Data Owners in Cloud Computing
ABSTRACT: With the advent of cloud computing, it has become increasingly popular for data owners to outsource their data to public cloud servers while allowing data users to retrieve this data. For privacy reasons, secure searches over encrypted cloud data have motivated several research works under the single-owner model. However, most cloud servers in practice do not just serve one owner; instead, they support multiple owners to share the benefits brought by cloud computing. In this paper, we propose schemes to deal with Privacy-preserving Ranked Multi-keyword Search in a Multi-owner model (PRMSM). To enable cloud servers to perform secure search without knowing the actual data of either keywords or trapdoors, we systematically construct a novel secure search protocol. To rank the search results and preserve the privacy of relevance scores between keywords and files, we propose a novel Additive Order and Privacy Preserving Function family. To prevent attackers from eavesdropping on secret keys and pretending to be legal data users submitting searches, we propose a novel dynamic secret key generation protocol and a new data user authentication protocol. Furthermore, PRMSM supports efficient data user revocation. Extensive experiments on real-world datasets confirm the efficacy and efficiency of PRMSM.
ABSTRACT: Cloud computing is the latest technology in the field of distributed computing. It provides various online and on-demand services for data storage, network services, platform services, etc. Many organizations are reluctant to use cloud services due to data security issues, as the data reside on the cloud service provider's servers. To address this issue, several approaches have been applied by researchers worldwide to strengthen the security of stored data in cloud computing. The Bi-directional DNA Encryption Algorithm (BDEA) is one such data security technique. However, the existing technique focuses only on the ASCII character set, ignoring non-English users of cloud computing. Thus, this proposed work focuses on enhancing BDEA to work with Unicode characters.
IEEE 2018: Anonymous Authentication for Secure Data Stored on Cloud with Decentralized Access Control
ABSTRACT: A decentralized storage system for accessing data with anonymous authentication provides more secure user authentication and user revocation, and prevents replay attacks. Access control is processed on decentralized key distribution centers (KDCs), making data encryption more secure. The generated decentralized KDCs are then grouped by a key generation center (KGC). Our system provides authentication for the user, so that only system-authorized users are able to decrypt and view the stored information. User validation and the access control scheme are handled in a decentralized way, which is useful for preventing replay attacks and supports modification of data stored in the cloud. The access control scheme is gaining more attention because it is important that only approved users have access to valid services. Our scheme supports creation, reading, and modification of data stored in the cloud while preventing replay attacks. We also address user revocation. The problems of validation, access control, and privacy protection should be solved simultaneously.
IEEE 2018: Enabling Identity-Based Integrity Auditing and Data Sharing with Sensitive Information Hiding for Secure Cloud Storage
ABSTRACT: With cloud storage services, users can remotely store their data in the cloud and realize data sharing with others. Remote data integrity auditing is proposed to guarantee the integrity of the data stored in the cloud. In some common cloud storage systems, such as Electronic Health Records (EHRs) systems, the cloud file might contain sensitive information. The sensitive information should not be exposed to others when the cloud file is shared. Encrypting the whole shared file can realize sensitive information hiding, but will make the shared file unusable by others. How to realize data sharing with sensitive information hiding in remote data integrity auditing has not yet been explored. In order to address this problem, we propose a remote data integrity auditing scheme that realizes data sharing with sensitive information hiding. In this scheme, a sanitizer is used to sanitize the data blocks corresponding to the sensitive information of the file and to transform their signatures into valid ones for the sanitized file. These signatures are used to verify the integrity of the sanitized file in the integrity auditing phase. As a result, our scheme makes the file stored in the cloud able to be shared and used by others on the condition that the sensitive information is hidden, while the remote data integrity auditing can still be efficiently executed. Meanwhile, the proposed scheme is based on identity-based cryptography, which simplifies complicated certificate management. The security analysis and performance evaluation show that the proposed scheme is secure and efficient.
ABSTRACT: For data analytics jobs running across geographically distributed datacenters, coflows have to go through the inter-datacenter network over relatively low-bandwidth and high-cost links. In this case, optimizing cost-performance tradeoffs for such coflows becomes crucial. Ideally, decreasing coflow completion time (CCT) can significantly improve network performance; meanwhile, reducing the transmission cost introduced by these coflows is another fundamental goal for datacenter operators. Unfortunately, minimizing the CCT and minimizing the transmission cost are conflicting objectives which cannot be achieved concurrently. Prior methods have significant limitations when exploring such tradeoffs, because they either merely decrease the average CCT or reduce the transmission cost independently. In this paper, we focus on a cost-performance tradeoff problem for coflows running across the inter-datacenter network. Specifically, we formulate an optimization problem so as to minimize a combination of both the average CCT and the average transmission cost. This problem is inherently hard to solve due to the unknown information of future coflows. We therefore present Lever, an online coflow-aware optimization framework, to balance these two conflicting objectives. Without any prior knowledge of future coflows, Lever is proved to have a non-trivial competitive ratio in solving this cost-performance tradeoff problem. Results from large-scale simulations demonstrate that Lever can significantly reduce the average transmission cost and, at the same time, speed up the completion of these coflows, compared to state-of-the-art solutions.
IEEE 2018: Stability of Evolving Fuzzy Systems based on Data Clouds
ABSTRACT: Evolving fuzzy systems (EFSs) are now well developed and widely used thanks to their ability to self-adapt both their structures and parameters online. Since the concept was first introduced two decades ago, many different types of EFSs have been successfully implemented. However, there are only very few works considering the stability of EFSs, and these studies were limited to certain types of membership functions with specifically pre-defined parameters, which largely increases the complexity of the learning process. At the same time, stability analysis is of paramount importance for control applications and provides theoretical guarantees for the convergence of the learning algorithms. In this paper, we introduce the stability proof of a class of EFSs based on data clouds, which are grounded in AnYa-type fuzzy systems and the recently introduced empirical data analysis (EDA) methodological framework. By employing data clouds, the class of EFSs of AnYa type considered in this work avoids the traditional way of defining membership functions for each input variable in an explicit manner, and its learning process is entirely data-driven. The stability of the considered EFS of AnYa type is proven through Lyapunov theory, and the proof shows that the average identification error converges to a small neighborhood of zero. Although the stability proof presented in this paper is specially elaborated for the considered EFS, it is also applicable to general EFSs. The proposed method is illustrated on the Box-Jenkins gas furnace problem, a nonlinear system identification problem, the Mackey-Glass time series prediction problem, eight real-world benchmark regression problems, and a high-frequency trading prediction problem. Compared with other EFSs, the numerical examples show that the EFS considered in this paper provides guaranteed stability as well as better approximation accuracy.
IEEE 2018: Anonymous and Traceable Group Data Sharing in Cloud Computing
IEEE 2018: Efficient and Expressive Keyword Search Over Encrypted Data in Cloud

IEEE 2017: Two-Factor Data Access Control With Efficient Revocation for Multi-Authority Cloud Storage Systems
ABSTRACT: Attribute-based encryption, especially ciphertext-policy attribute-based encryption, can fulfill the functionality of fine-grained access control in cloud storage systems. Since users' attributes may be issued by multiple attribute authorities, multi-authority ciphertext-policy attribute-based encryption is an emerging cryptographic primitive for enforcing attribute-based access control on outsourced data. However, most of the existing multi-authority attribute-based systems are either insecure under attribute-level revocation or lack efficiency in communication overhead and computation cost. In this paper, we propose an attribute-based access control scheme with two-factor protection for multi-authority cloud storage systems. In our proposed scheme, a user can recover the outsourced data if and only if this user holds sufficient attribute secret keys with respect to the access policy and an authorization key with respect to the outsourced data. In addition, the proposed scheme enjoys the properties of constant-size ciphertext and small computation cost. Besides supporting attribute-level revocation, our proposed scheme allows the data owner to carry out user-level revocation. The security analysis, performance comparisons, and experimental results indicate that our proposed scheme is not only secure but also practical.
IEEE 2017: FastGeo: Efficient Geometric Range Queries on Encrypted Spatial Data

IEEE 2017: Practical Privacy-Preserving Content-Based Retrieval in Cloud Image Repositories
ABSTRACT: Storage requirements for visual data have been increasing in recent years, following the emergence of many highly interactive multimedia services and applications for mobile devices in both personal and corporate scenarios. This has been a key driving factor for the adoption of cloud-based data outsourcing solutions. However, outsourcing data storage to the cloud also leads to new security challenges that must be carefully addressed, especially regarding privacy. In this paper, we propose a secure framework for outsourced privacy-preserving storage and retrieval in large shared image repositories. Our proposal is based on IES-CBIR, a novel Image Encryption Scheme that exhibits Content-Based Image Retrieval properties. The framework enables both encrypted storage and searching using Content-Based Image Retrieval queries while preserving privacy against honest-but-curious cloud administrators. We have built a prototype of the proposed framework, formally analyzed and proven its security properties, and experimentally evaluated its performance and retrieval precision.
IEEE 2017: Temporal Task Scheduling With Constrained Service Delay for Profit Maximization in Hybrid Clouds
ABSTRACT: As cloud computing becomes increasingly popular, consumers' tasks from around the world arrive in cloud data centers. A private cloud provider aims to achieve profit maximization by intelligently scheduling tasks while guaranteeing the service delay bound of delay-tolerant tasks. However, the aperiodicity of task arrivals brings a challenging problem of how to dynamically schedule all arriving tasks given that the capacity of a private cloud provider is limited. Previous works usually provide an admission control to intelligently refuse some arriving tasks. Nevertheless, this decreases the throughput of a private cloud and causes revenue loss. This paper studies the problem of how to maximize the profit of a private cloud in hybrid clouds while guaranteeing the service delay bound of delay-tolerant tasks. We propose a profit maximization algorithm (PMA) to discover the temporal variation of prices in hybrid clouds. The temporal task scheduling provided by PMA can dynamically schedule all arriving tasks to execute in private and public clouds. The subproblem in each iteration of PMA is solved by the proposed hybrid heuristic optimization algorithm, simulated annealing particle swarm optimization (SAPSO). Moreover, SAPSO is compared with existing baseline algorithms. Extensive simulation experiments demonstrate that the proposed method can greatly increase the throughput and profit of a private cloud while guaranteeing the service delay bound.
IEEE 2017: Optimizing Cloud-Service Performance: Efficient Resource Provisioning via Optimal Workload Allocation

IEEE 2017: Live Data Analytics With Collaborative Edge and Cloud Processing in Wireless IoT Networks
ABSTRACT: Recently, big data analytics has received significant attention in a variety of application domains including business, finance, space science, healthcare, telecommunication, and the Internet of Things (IoT). Among these areas, IoT is considered an important platform for bringing people, processes, data, and things/objects together in order to enhance the quality of our everyday lives. However, the key challenges are how to effectively extract useful features from the massive amount of heterogeneous data generated by resource-constrained IoT devices in order to provide real-time information and feedback to end users, and how to utilize this data-aware intelligence to enhance the performance of wireless IoT networks. Although there are parallel advances in cloud computing and edge computing for addressing some issues in data analytics, they have their own benefits and limitations. The convergence of these two computing paradigms, i.e., the massive virtually shared pool of computing and storage resources from the cloud and real-time data processing by edge computing, could effectively enable live data analytics in wireless IoT networks. In this regard, we propose a novel framework for coordinated processing between edge and cloud computing/processing by integrating the advantages of both platforms. The proposed framework can exploit the network-wide knowledge and historical information available at the cloud center to guide edge computing units towards satisfying various performance requirements of heterogeneous wireless IoT networks. Starting with the main features, key enablers, and challenges of big data analytics, we present various synergies and distinctions between cloud and edge processing. More importantly, we identify and describe the potential key enablers for the proposed edge-cloud collaborative framework, the associated key challenges, and some interesting future research directions.
IEEE 2017: Optimizing Green Energy, Cost, and Availability in Distributed Data Centers
ABSTRACT: Integrating renewable energy and ensuring high availability are two major requirements for geo-distributed data centers. Availability is ensured by provisioning spare capacity across the data centers to mask data center failures (either partial or complete). We propose a mixed integer linear programming formulation for capacity planning that minimizes the total cost of ownership (TCO) for highly available, green, distributed data centers. We minimize the cost due to power consumption and server deployment, while targeting a minimum usage of green energy. Solving our model shows that capacity provisioning that considers green energy integration not only lowers the carbon footprint but also reduces the TCO. Results show that up to 40% green energy usage is feasible with a marginal increase in the TCO compared to other cost-aware models.
IEEE 2017: Cost Minimization Algorithms for Data Center Management
ABSTRACT: Due to the increasing usage of cloud computing applications, it is important to minimize the energy cost consumed by a data center and, simultaneously, to improve the quality of service via data center management. One promising approach is to switch some servers in a data center to the idle mode to save energy while keeping a suitable number of servers in the active mode to provide timely service. In this paper, we design both online and offline algorithms for this problem. For the offline algorithm, we formulate data center management as a cost minimization problem by considering energy cost, delay cost (to measure service quality), and switching cost (to change servers' active/idle mode). Then, we analyze certain properties of an optimal solution which lead to a dynamic programming based algorithm. Moreover, by revising the solution procedure, we successfully eliminate the recursive procedure and achieve an optimal offline algorithm with polynomial complexity.
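A toy dynamic program in the spirit of the offline view (the cost terms below are illustrative placeholders, not the paper's exact model): choose how many servers stay active in each slot to minimize energy, delay, and switching costs.

def plan_servers(load, max_servers, e=1.0, d=5.0, s=2.0):
    # load[t]: demand per slot; e/d/s: energy, delay, switching unit costs.
    INF = float("inf")
    n = len(load)
    cost = [[INF] * (max_servers + 1) for _ in range(n)]
    back = [[0] * (max_servers + 1) for _ in range(n)]
    slot = lambda t, k: e * k + d * max(load[t] - k, 0)  # energy + delay
    for k in range(max_servers + 1):
        cost[0][k] = slot(0, k) + s * k      # servers switched on at start
    for t in range(1, n):
        for k in range(max_servers + 1):
            for j in range(max_servers + 1):
                c = cost[t - 1][j] + s * abs(k - j) + slot(t, k)
                if c < cost[t][k]:
                    cost[t][k], back[t][k] = c, j
    k = min(range(max_servers + 1), key=lambda x: cost[n - 1][x])
    sched = [k]
    for t in range(n - 1, 0, -1):
        k = back[t][k]
        sched.append(k)
    return sched[::-1]

# High switching cost can make it cheaper to keep servers warm through idle slots:
print(plan_servers([3, 3, 0, 0, 4], max_servers=5))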
IEEE 2017: Vehicular Cloud Data Collection for Intelligent Transportation Systems

IEEE 2017: RAAC: Robust and Auditable Access Control with Multiple Attribute Authorities for Public Cloud Storage
ABSTRACT: Data access control is a challenging issue in public cloud storage systems. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been adopted as a promising technique to provide flexible, fine-grained, and secure data access control for cloud storage with honest-but-curious cloud servers. However, in the existing CP-ABE schemes, the single attribute authority must execute the time-consuming user legitimacy verification and secret key distribution, which results in a single-point performance bottleneck when a CP-ABE scheme is adopted in a large-scale cloud storage system. Users may be stuck in the waiting queue for a long period to obtain their secret keys, resulting in low efficiency of the system. Although multi-authority access control schemes have been proposed, these schemes still cannot overcome the drawbacks of the single-point bottleneck and low efficiency, due to the fact that each of the authorities still independently manages a disjoint attribute set.
IEEE 2017: Identity-Based Remote Data Integrity Checking With Perfect Data Privacy Preserving for Cloud Storage
ABSTRACT: Remote data integrity checking (RDIC) enables a data storage server, say a cloud server, to prove to a verifier that it is actually storing a data owner's data honestly. To date, a number of RDIC protocols have been proposed in the literature, but most of the constructions suffer from the issue of complex key management; that is, they rely on expensive public key infrastructure (PKI), which might hinder the deployment of RDIC in practice. In this paper, we propose a new construction of an identity-based (ID-based) RDIC protocol by making use of a key-homomorphic cryptographic primitive to reduce the system complexity and the cost of establishing and managing the public key authentication framework in PKI-based RDIC schemes. We formalize ID-based RDIC and its security model, including security against a malicious cloud server and zero-knowledge privacy against a third-party verifier. The proposed ID-based RDIC protocol leaks no information about the stored data to the verifier during the RDIC process. The new construction is proven secure against the malicious server in the generic group model and achieves zero-knowledge privacy against a verifier. Extensive security analysis and implementation results demonstrate that the proposed protocol is provably secure and practical for real-world applications.
IEEE 2017: Identity-Based Data Outsourcing with Comprehensive Auditing in Clouds
ABSTRACT: Cloud storage systems provide convenient file storage and sharing services for distributed clients. To address integrity, controllable outsourcing, and origin-auditing concerns on outsourced files, we propose an identity-based data outsourcing (IBDO) scheme equipped with desirable features advantageous over existing proposals in securing outsourced data. First, our IBDO scheme allows a user to authorize dedicated proxies to upload data to the cloud storage server on her behalf, e.g., a company may authorize some employees to upload files to the company's cloud account in a controlled way. The proxies are identified and authorized with their recognizable identities, which eliminates complicated certificate management in usual secure distributed computing systems. Second, our IBDO scheme facilitates comprehensive auditing; i.e., our scheme not only permits regular integrity auditing as in existing schemes for securing outsourced data, but also allows auditing of the information on the origin, type, and consistency of outsourced files.
IEEE 2017: TAFC: Time and Attribute Factors Combined Access Control on Time-Sensitive Data in Public Cloud

IEEE 2017: Attribute-Based Storage Supporting Secure Deduplication of Encrypted Data in Cloud
ABSTRACT: Attribute-based encryption (ABE) has been widely used in cloud computing, where a data provider outsources his/her encrypted data to a cloud service provider and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system does not support secure deduplication, which is crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is responsible for duplicate detection and a public cloud manages the storage. Compared with prior data deduplication systems, our system has two advantages. First, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Second, it achieves the standard notion of semantic security for data confidentiality, while existing systems only achieve it under a weaker security notion.
IEEE 2017: A Collision-Mitigation Cuckoo Hashing Scheme for Large-scale Storage Systems
ABSTRACT: With the rapid growth of the amount of information, cloud computing servers need to process and analyze large amounts of high-dimensional and unstructured data in a timely and accurate manner. This usually requires many query operations. Due to their simplicity and ease of use, cuckoo hashing schemes have been widely used in real-world cloud-related applications. However, due to potential hash collisions, cuckoo hashing suffers from endless loops and high insertion latency, and even a high risk of reconstruction of the entire hash table. In order to address these problems, we propose a cost-efficient cuckoo hashing scheme, called MinCounter. The idea behind MinCounter is to alleviate the occurrence of endless loops during data insertion by selecting unbusy kick-out routes. MinCounter selects "cold" (infrequently accessed), rather than random, buckets to handle hash collisions. We further improve the concurrency of the MinCounter scheme to pursue higher performance and adapt to concurrent applications. MinCounter has the salient features of offering efficient insertion and query services and delivering high performance for cloud servers, as well as enhancing the experience for cloud users. We have implemented MinCounter in a large-scale cloud testbed and examined its performance using three real-world traces. Extensive experimental results demonstrate the efficacy and efficiency of MinCounter.
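A simplified sketch of the MinCounter idea (two hash choices, per-bucket counters, illustrative parameters): on a collision, evict from the candidate bucket with the smaller counter, i.e., the "colder" one, instead of a random bucket.

import hashlib

SIZE = 8
table = [None] * SIZE   # each slot holds one key
counter = [0] * SIZE    # per-bucket activity counters

def h(key, i):
    digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % SIZE

def insert(key, max_kicks=16):
    for _ in range(max_kicks):
        b1, b2 = h(key, 1), h(key, 2)
        for b in (b1, b2):
            if table[b] is None:
                table[b], counter[b] = key, counter[b] + 1
                return True
        cold = min((b1, b2), key=lambda b: counter[b])  # colder bucket
        table[cold], key = key, table[cold]             # kick out its occupant
        counter[cold] += 1
    return False  # a real system would rehash or resize here

for k in ["alpha", "beta", "gamma", "delta"]:
    insert(k)
print(table, counter)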
IEEE 2016: Reducing Fragmentation for In-line Deduplication Backup Storage via Exploiting Backup History and Cache Knowledge

IEEE 2016: Secure Data Sharing in Cloud Computing Using Revocable-Storage Identity-Based Encryption

IEEE 2016: Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage

IEEE 2016: Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revocation
ABSTRACT: The advent of cloud computing has made storage outsourcing a rising trend, which promotes secure remote data auditing as a hot topic in the research literature. Recently, some research has considered the problem of secure and efficient public data integrity auditing for shared dynamic data. However, these schemes are still not secure against collusion between the cloud storage server and revoked group users during user revocation in a practical cloud storage system. In this paper, we figure out the collusion attack in the existing scheme and provide an efficient public integrity auditing scheme with secure group user revocation based on vector commitment and verifier-local revocation group signatures. We design a concrete scheme based on our scheme definition. Our scheme supports public checking and efficient user revocation, as well as some nice properties, such as confidentiality, efficiency, countability, and traceability of secure group user revocation. Finally, the security and experimental analysis show that, compared with relevant schemes, our scheme is also secure and efficient.
IEEE 2016: Secure Auditing and Deduplicating Data in Cloud
ABSTRACT: As cloud computing technology has developed over the last decade, outsourcing data to cloud services for storage has become an attractive trend, which benefits users by sparing them the effort of heavy data maintenance and management. Nevertheless, since outsourced cloud storage is not fully trustworthy, it raises security concerns about how to realize data deduplication in the cloud while achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication of cloud data. Specifically, aiming at achieving both data integrity and deduplication in the cloud, we propose two secure systems, namely SecCloud and SecCloud+. SecCloud introduces an auditing entity that maintains a MapReduce cloud, which helps clients generate data tags before uploading as well as audit the integrity of data already stored in the cloud. Compared with previous work, the computation performed by the user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is designed motivated by the fact that customers always want to encrypt their data before uploading, and it enables integrity auditing and secure deduplication on encrypted data.
IEEE 2016: CloudArmor: Supporting Reputation-based Trust Management for Cloud Services

IEEE 2016: Secure Optimization Computation Outsourcing in Cloud Computing: A Case Study of Linear Programming

IEEE 2016: Ensures Dynamic access and Secure E-Governance system in Clouds Services – EDSE

IEEE 2016: On Traffic-Aware Partition and Aggregation in MapReduce for Big Data Applications

IEEE 2016: A Secure and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud Data

IEEE 2016: An Efficient Privacy-Preserving Ranked Keyword Search Method

IEEE 2016: Differentially Private Online Learning for Cloud-Based Video Recommendation with Multimedia Big Data in Social Networks

IEEE 2016: Fine-Grained Two-Factor Access Control for Web-Based Cloud Computing Services

IEEE 2016: Dual-Server Public-Key Encryption with Keyword Search for Secure Cloud Storage

IEEE 2016: DeyPoS: Deduplicatable Dynamic Proof of Storage for Multi-User Environments

IEEE 2016: KSF-OABE: Outsourced Attribute-Based Encryption with Keyword Search Function for Cloud Storage

IEEE 2016: SecRBAC: Secure data in the Clouds

IEEE 2015: Identity-based Encryption with Outsourced Revocation in Cloud Computing

IEEE 2015: Control Cloud Data Access Privilege and Anonymity With Fully Anonymous Attribute-Based Encryption

IEEE 2015: DROPS: Division and Replication of Data in Cloud for Optimal Performance and Security

IEEE 2015: An Efficient Green Control Algorithm in Cloud Computing for Cost Optimization

IEEE 2015: Revisiting Attribute-Based Encryption With Verifiable Outsourced Decryption

IEEE 2015: I-sieve: An inline high performance deduplication system used in cloud storage

IEEE 2015: A Hybrid Cloud Approach for Secure Authorized Deduplication

IEEE 2015: Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybrid Clouds
Abstract: With the recent advent of cloud computing technologies, a growing number of content distribution applications are contemplating a switch to cloud-based services, for better scalability and lower cost. Two key tasks are involved in such a move: migrating the contents to cloud storage, and distributing the web service load to cloud-based web services. The main issue is to best utilize the cloud as well as the application provider's existing private cloud, to serve volatile requests with a service response time guarantee at all times, while incurring the minimum operational cost. While it may not be too difficult to design a simple heuristic, proposing one with guaranteed cost optimality over a long run of the system constitutes an intimidating challenge. Employing Lyapunov optimization techniques, we design a dynamic control algorithm to optimally place contents and dispatch requests in a hybrid cloud infrastructure spanning geo-distributed data centers, which minimizes the overall operational cost over time, subject to service response time constraints. Rigorous analysis shows that the algorithm nicely bounds the response times within the preset QoS target, and guarantees that the overall cost is within a small constant gap from the optimum achieved by a T-slot lookahead mechanism with known future information. We verify the performance of our dynamic algorithm with prototype-based evaluation.
IEEE 2015: CloudSky: A Controllable Data Self-Destruction System for Untrusted Cloud Storage Networks

IEEE 2015: Energy-aware Load Balancing and Application Scaling for the Cloud Ecosystem

IEEE 2015: SecDep: A user-aware efficient fine-grained secure deduplication scheme with multi-level key management

IEEE 2015: A Secure Client-Side Deduplication Scheme in Cloud Storage Environments

IEEE 2015: Adaptive Algorithm for Minimizing Cloud Task Length with Prediction Errors

An Efficient Information Retrieval Approach for Collaborative Cloud Computing
Abstract—Collaborative cloud computing (CCC), which is collaboratively supported by various organizations (Google, IBM, Amazon, Microsoft), offers a promising future for information retrieval. Human beings tend to keep things simple by moving the complex aspects to computing. As a consequence, we prefer to go to one or a limited number of sources for all our information needs. In the contemporary scenario, where information is replicated, modified (value added), and scattered geographically, retrieving information in a suitable form requires a lot more effort from the user and is thus difficult. For instance, we would like to go directly to the source of information and at the same time not be burdened with additional effort. This is where we can make use of learning systems (neural network based) that can intelligently decide and retrieve the information that we need by going directly to the source of information. This also reduces single points of failure, eliminates bottlenecks in the path of information flow, reduces time delay, and provides a remarkable ability to cope with complicated traffic congestion patterns. This makes for an efficient information retrieval approach for collaborative cloud computing.
Building Confidential and Efficient Query Services in the Cloud with RASP Data Perturbation
Abstract—With the wide deployment of public cloud computing infrastructures, using clouds to host data query services has become an appealing solution for its advantages in scalability and cost-saving. However, some data might be so sensitive that the data owner does not want to move them to the cloud unless data confidentiality and query privacy are guaranteed. On the other hand, a secured query service should still provide efficient query processing and significantly reduce the in-house workload to fully realize the benefits of cloud computing. We propose the RASP data perturbation method to provide secure and efficient range query and kNN query services for protected data in the cloud. The RASP data perturbation method combines order-preserving encryption, dimensionality expansion, random noise injection, and random projection to provide strong resilience to attacks on the perturbed data and queries. It also preserves multidimensional ranges, which allows existing indexing techniques to be applied to speed up range query processing. The kNN-R algorithm is designed to work with the RASP range query algorithm to process kNN queries. We have carefully analyzed the attacks on data and queries under a precisely defined threat model and realistic security assumptions. Extensive experiments have been conducted to show the advantages of this approach in efficiency and security.
Compatibility-aware
Cloud Service Composition under Fuzzy Preferences of Users
Abstract—When a single Cloud service (i.e., a software image and a virtual machine), on its own, cannot satisfy all the user requirements, a composition of Cloud services is required. Cloud service composition, which includes several tasks such as discovery, compatibility checking, selection, and deployment, is a complex process, and users find it difficult to select the best one among the hundreds, if not thousands, of possible compositions available. Service composition in the Cloud raises new challenges caused by the diversity of users with different expertise, who require their applications to be deployed across different geographical locations with distinct legal constraints. The main difficulty lies in selecting a combination of virtual appliances (software images) and infrastructure services that are compatible and satisfy a user with vague preferences. Therefore, we present a framework and algorithms which simplify Cloud service composition for unskilled users. We develop an ontology-based approach to analyze Cloud service compatibility by applying reasoning on expert knowledge. In addition, to minimize the effort of users in expressing their preferences, we apply a combination of evolutionary algorithms and fuzzy logic for composition optimization. This lets users express their needs in linguistic terms, which brings great comfort to them compared to systems that force users to assign exact weights for all preferences.
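
The abstract leaves the fuzzy machinery unspecified; as a rough illustration of scoring candidate compositions against a linguistic preference, here is a toy triangular-membership sketch in which the terms, breakpoints, and the "cost" attribute are entirely our own assumptions:

def triangular(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms for "cost" (normalized to [0, 1]).
COST_TERMS = {
    "cheap":     lambda x: triangular(x, -0.01, 0.0, 0.5),
    "moderate":  lambda x: triangular(x, 0.25, 0.5, 0.75),
    "expensive": lambda x: triangular(x, 0.5, 1.0, 1.01),
}

def score(composition, preference):
    """Degree to which a candidate composition matches a linguistic preference."""
    return COST_TERMS[preference](composition["cost"])

candidates = [{"name": "c1", "cost": 0.2}, {"name": "c2", "cost": 0.6}]
best = max(candidates, key=lambda c: score(c, "cheap"))
print(best["name"])  # c1 matches "cheap" better

In the paper's setting, an evolutionary search would explore compositions while memberships like these replace exact user-assigned weights.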
Consistency as a Service: Auditing Cloud Consistency
Abstract—Cloud storage services have become commercially popular due to their overwhelming advantages. To provide ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas for each piece of data on geographically distributed servers. A key problem of using the replication technique in clouds is that it is very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not. We propose a two-level auditing architecture, which only requires a loosely synchronized clock in the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the commonality of violations and the staleness of the value of a read. Finally, we devise a heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were performed using a combination of simulations and real cloud deployments to validate HAS.
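
The abstract names two violation metrics without defining them; below is a minimal sketch of one plausible reading of read staleness (a read returning a value that was already overwritten), over an operation log with loosely synchronized timestamps. The log format is our own assumption:

# Each log entry: (timestamp, op, key, value). Timestamps come from
# loosely synchronized clocks, as in the audit-cloud setting.
log = [
    (1.0, "W", "x", "v1"),
    (2.0, "W", "x", "v2"),
    (3.5, "R", "x", "v1"),   # stale: v1 was overwritten at t=2.0
]

def read_staleness(log):
    """For each read, report how long before it the returned value was overwritten."""
    writes = {}                         # key -> list of (time, value)
    stale = []
    for t, op, key, val in sorted(log):
        if op == "W":
            writes.setdefault(key, []).append((t, val))
        else:
            hist = writes.get(key, [])
            wrote_at = max((wt for wt, wv in hist if wv == val), default=None)
            if wrote_at is None:
                continue
            overwritten = [wt for wt, wv in hist if wt > wrote_at]
            if overwritten:             # value was already replaced when read
                stale.append((key, val, t - min(overwritten)))
    return stale

print(read_staleness(log))   # [('x', 'v1', 1.5)]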
Data Similarity-Aware Computation Infrastructure for the Cloud
Abstract—Cloud computing is emerging as a platform for scalable and efficient services. To meet the needs of handling massive data and decreasing data migration, the computation infrastructure requires efficient data placement and proper management of cached data. In this paper, we propose an efficient and cost-effective multilevel caching scheme, called MERCURY, as the computation infrastructure of the cloud. The idea behind MERCURY is to explore and exploit data similarity and support efficient data placement. To accurately and efficiently capture the data similarity, we leverage low-complexity locality-sensitive hashing (LSH). In our design, in addition to the problem of space inefficiency, we identify that a conventional LSH scheme also suffers from the problem of homogeneous data placement. To address these two problems, we design a novel multicore-enabled locality-sensitive hashing (MC-LSH) that accurately captures the differentiated similarity across data. The similarity-aware MERCURY hence partitions data into the L1 cache, L2 cache, and main memory based on their distinct localities, which helps optimize cache utilization and minimize pollution in the last-level cache. Besides extensive evaluation through simulations, we also implemented MERCURY in a system. Experimental results based on real-world applications and data sets demonstrate the efficiency and efficacy of our proposed schemes.
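
MC-LSH itself is not specified in the abstract; as background, here is a minimal random-projection LSH sketch showing how similar items hash to the same bucket, the property MERCURY exploits for placement. The signature length and seed are arbitrary choices of ours:

import numpy as np

rng = np.random.default_rng(seed=7)

def lsh_signature(v, planes):
    """Sign of projections onto random hyperplanes: nearby vectors
    tend to share signatures, so they land in the same cache bucket."""
    return tuple((planes @ v) > 0)

planes = rng.standard_normal((8, 4))    # 8-bit signature for 4-dim data
a = np.array([1.0, 0.2, 0.0, 0.5])
b = a + 0.01                            # near-duplicate of a
c = np.array([-1.0, 3.0, 2.0, -0.5])    # dissimilar item

buckets = {}
for name, v in [("a", a), ("b", b), ("c", c)]:
    buckets.setdefault(lsh_signature(v, planes), []).append(name)
print(list(buckets.values()))           # a and b share a bucket; c is likely alone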
Maximizing Revenue with Dynamic Cloud Pricing: The Infinite Horizon Case
Abstract—We study the infinite horizon dynamic pricing problem for an infrastructure cloud provider in the emerging cloud computing paradigm. The cloud provider, such as Amazon, provides computing capacity in the form of virtual instances and charges customers a time-varying price for the period they use the instances. The provider's problem is then to find an optimal pricing policy, in the face of stochastic demand arrivals and departures, so that the average expected revenue is maximized in the long run. We adopt a revenue management framework to tackle the problem. Optimality conditions and structural results are obtained for our stochastic formulation, which yield insights on the optimal pricing strategy. Numerical results verify our analysis and reveal additional properties of optimal pricing policies for the infinite horizon case.
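
The abstract gives no model details; as a generic illustration of this class of problem, the sketch below runs relative value iteration on a toy birth-death system where the posted price controls the arrival rate. All rates, prices, and the demand curve are invented for illustration and are not the paper's formulation:

# Toy model: up to C instances; posting price p yields arrivals at rate
# lam0*(1 - p/p_max); each busy instance departs at rate mu; a fee p is
# collected per admitted customer. Relative value iteration on the
# uniformized chain approximates the optimal average-revenue policy.
C, mu, lam0, p_max = 10, 1.0, 5.0, 10.0
prices = [i * p_max / 10 for i in range(11)]
Lam = lam0 + C * mu                      # uniformization constant

h = [0.0] * (C + 1)
policy = [0.0] * (C + 1)
for _ in range(2000):
    new = [0.0] * (C + 1)
    for n in range(C + 1):
        best_val, best_p = float("-inf"), prices[0]
        for p in prices:
            lam = lam0 * (1 - p / p_max) if n < C else 0.0
            val = ((lam / Lam) * (p + h[min(n + 1, C)])
                   + (n * mu / Lam) * h[max(n - 1, 0)]
                   + (1 - lam / Lam - n * mu / Lam) * h[n])
            if val > best_val:
                best_val, best_p = val, p
        new[n], policy[n] = best_val, best_p
    h = [v - new[0] for v in new]        # relative values keep the iteration bounded
print(policy)                            # prices tend to rise with occupancy

The qualitative output, higher prices at higher occupancy, matches the kind of structural result such papers establish analytically.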
Enabling Data Integrity Protection in Regenerating-Coding-Based Cloud Storage
Abstract—We design and implement a practical data integrity protection (DIP) scheme for a specific regenerating code, while preserving the intrinsic properties of fault tolerance and repair traffic saving. Our DIP scheme is designed under a Byzantine adversarial model, and enables a client to feasibly verify the integrity of random subsets of outsourced data against general or malicious corruptions. It works under the simple assumption of thin-cloud storage and allows different parameters to be fine-tuned for the performance-security trade-off. We implement and evaluate the overhead of our DIP scheme in a real cloud storage test bed under different parameter choices. We demonstrate that remote integrity checking can be feasibly integrated into regenerating codes in practical deployment.
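
The abstract only states that random subsets are verified; here is a bare-bones illustration of the spot-checking idea using keyed MACs over blocks. The block size, sample size, and key handling are our own simplifications, not the paper's construction:

import hmac, hashlib, os, random

BLOCK = 4096
key = os.urandom(32)                     # client-side secret key

def tag(i, block):
    """MAC binds the block content to its index i."""
    return hmac.new(key, i.to_bytes(8, "big") + block, hashlib.sha256).digest()

data = os.urandom(BLOCK * 100)           # 100 outsourced blocks
blocks = [data[i*BLOCK:(i+1)*BLOCK] for i in range(100)]
tags = [tag(i, b) for i, b in enumerate(blocks)]   # stored alongside the data

def spot_check(blocks, tags, samples=10):
    """Verify a random subset of blocks against their tags."""
    for i in random.sample(range(len(blocks)), samples):
        if not hmac.compare_digest(tags[i], tag(i, blocks[i])):
            return False
    return True

blocks[7] = os.urandom(BLOCK)            # simulate a corruption
print(spot_check(blocks, tags, samples=20))
# False only if block 7 happens to be sampled (20% chance here); real schemes
# size the sample so any significant corruption is caught with high probability.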
Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage
Abstract—Data sharing is an important functionality in cloud storage. In this article, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems which produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, but encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.
Low-Carbon Routing Algorithms for Cloud Computing Services in IP-over-WDM Networks
NCCloud: A Network-Coding-Based Storage System in a Cloud-of-Clouds
Abstract—To provide fault tolerance for cloud storage, recent studies propose to stripe data across multiple cloud vendors. However, if a cloud suffers a permanent failure and loses all its data, we need to repair the lost data with the help of the other surviving clouds to preserve data redundancy. We present a proxy-based storage system for fault-tolerant multiple-cloud storage called NCCloud, which achieves cost-effective repair for a permanent single-cloud failure. NCCloud is built on top of a network-coding-based storage scheme called functional minimum-storage regenerating (FMSR) codes, which maintain the same fault tolerance and data redundancy as traditional erasure codes (e.g., RAID-6), but use less repair traffic and hence incur less monetary cost due to data transfer. One key design feature of our FMSR codes is that we relax the encoding requirement of storage nodes during repair, while preserving the benefits of network coding in repair. We implement a proof-of-concept prototype of NCCloud and deploy it atop both local and commercial clouds. We validate that FMSR codes provide significant monetary cost savings in repair over RAID-6 codes, while having comparable response time performance in normal cloud storage operations such as upload/download.
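
To make the repair-traffic claim concrete, here is a quick calculator for the standard comparison under double-fault tolerance (k = n - 2). The expressions follow the usual FMSR analysis as we read it, and should be treated as illustrative rather than a quote from the paper:

def repair_traffic(n, M=1.0):
    """Data downloaded to repair one failed node, file size M,
    double-fault-tolerant codes (k = n - 2)."""
    raid6 = M                              # conventional repair reads k chunks = whole file
    fmsr = (n - 1) * M / (2 * (n - 2))     # one chunk from each of the n-1 survivors
    return raid6, fmsr

for n in (4, 6, 8):
    r, f = repair_traffic(n)
    print(n, f / r)    # 0.75, 0.625, 0.583...: FMSR trims 25% or more of repair traffic

As n grows, the ratio approaches one half, which is where the monetary savings on data transfer come from.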
Integrity Verification in Multi-Cloud Storage Using Cooperative Provable Data Possession
Abstract—Storage outsourcing in cloud computing is a rising trend which prompts a number of interesting security issues. Provable data possession (PDP) is a method for ensuring the integrity of data in storage outsourcing. This research addresses the construction of an efficient PDP mechanism, called Cooperative PDP (CPDP), for distributed cloud storage to support data migration and scalability of service, which considers the existence of multiple cloud service providers that collaboratively store and maintain the clients' data. The CPDP mechanism is based on homomorphic verifiable responses, a hash index hierarchy for dynamic scalability, and cryptographic encryption for security. Moreover, we prove the security of the scheme based on a multi-prover zero-knowledge proof system, which satisfies the knowledge soundness, completeness, and zero-knowledge properties. This approach incurs lower computation and communication overheads in comparison with non-cooperative approaches.
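
The abstract mentions homomorphic verifiable responses without detail; the toy discrete-log sketch below shows the core homomorphism (one aggregated response checked against per-block tags). The group parameters are illustrative, and this is not the paper's pairing-based construction:

import random

p = 2**255 - 19     # a well-known prime; toy group, not production parameters
g = 2

blocks = [b"block-0 data", b"block-1 data", b"block-2 data"]
msgs = [int.from_bytes(b, "big") for b in blocks]
tags = [pow(g, m, p) for m in msgs]          # homomorphic tag T_i = g^{m_i}

# Challenge: random coefficients a_i; the prover answers with ONE integer.
coeffs = [random.randrange(1, 2**32) for _ in msgs]
response = sum(a * m for a, m in zip(coeffs, msgs))   # mu = sum of a_i * m_i

# Verifier checks g^mu == product of T_i^{a_i} without seeing the blocks.
lhs = pow(g, response, p)
rhs = 1
for T, a in zip(tags, coeffs):
    rhs = rhs * pow(T, a, p) % p
print(lhs == rhs)    # True: one short response covers all challenged blocks

The same algebra lets responses from multiple clouds be combined into one proof, which is what makes the cooperative variant cheap in communication.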
Optimal Power Allocation and Load Distribution for Multiple Heterogeneous Multicore Server Processors across Clouds and Data Centers
Abstract—For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. Multicore processor technology provides new levels of performance and energy efficiency. The present paper aims to develop power- and performance-constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power-constrained performance optimization and performance-constrained power optimization. Our research problems in large-scale data centers are well-defined multivariable optimization problems, which explore the power-performance tradeoff by fixing one factor and minimizing the other, from the perspective of optimal load distribution. It is clear that such power and performance optimization is important for a cloud computing provider to efficiently utilize all the available resources. We model a multicore server processor as a queuing system with multiple servers. Our optimization problems are solved for two different models of core speed, where one model assumes that a core runs at zero speed when it is idle, and the other model assumes that a core runs at a constant speed. Our results in this paper provide new theoretical insights into power management and performance optimization in data centers.
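
As a toy instance of power-constrained performance optimization (not the paper's queueing model), suppose each server is an M/M/1 queue whose service rate equals its speed s and whose dynamic power grows as s cubed; a simple grid search then splits a power budget between two servers to minimize mean response time. All numbers are invented:

# Toy: two M/M/1 servers with arrival rates lam[i]; speed s gives mean
# response time 1/(s - lam[i]); dynamic power ~ s**3.
lam = [2.0, 4.0]
total = sum(lam)
budget = 300.0                       # total power budget (arbitrary units)

best = None
for i in range(1, 1000):
    s0 = lam[0] + i * 0.01           # server 0 speed (must exceed its load)
    p0 = s0 ** 3
    if p0 >= budget:
        break
    s1 = (budget - p0) ** (1 / 3)    # remaining power fixes server 1's speed
    if s1 <= lam[1]:
        continue                     # server 1 would be unstable; skip
    w = (lam[0] / total) / (s0 - lam[0]) + (lam[1] / total) / (s1 - lam[1])
    if best is None or w < best[0]:
        best = (w, s0, s1)
print(best)   # the higher speed goes to the more heavily loaded server

The paper solves the analogous trade-off analytically for multicore queueing models; this sketch only shows the shape of the constrained problem.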
Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud
Abstract—With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third-party auditor (TPA), who is still able to verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.
Towards Differential Query Services in Cost-Efficient Clouds
Abstract—Cloud computing, as an emerging technology trend, is expected to reshape the advances in information technology. In a cost-efficient cloud environment, a user can tolerate a certain degree of delay while retrieving information from the cloud in order to reduce costs. In this paper, we address two fundamental issues in such an environment: privacy and efficiency. We first review a private keyword-based file retrieval scheme that was originally proposed by Ostrovsky et al. Their scheme allows a user to retrieve files of interest from an untrusted server without leaking any information. The main drawback is that it causes a heavy querying overhead on the cloud, and thus goes against the original intention of cost efficiency. In this paper, we present a scheme, termed efficient information retrieval for ranked query (EIRQ), based on an aggregation and distribution layer (ADL), to reduce the querying overhead incurred on the cloud. In EIRQ, queries are classified into multiple ranks, where a higher-ranked query can retrieve a higher percentage of matched files. A user can retrieve files on demand by choosing queries of different ranks. This feature is useful when there are a large number of matched files, but the user only needs a small subset of them. Under different parameter settings, extensive evaluations have been conducted on both analytical models and on a real cloud environment, in order to examine the effectiveness of our schemes.
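
The rank-to-percentage idea is easy to make concrete; a tiny sketch follows, where the mapping from rank to file percentage is our own example rather than EIRQ's actual parameters:

import random

# Hypothetical mapping: rank 0 returns all matches, lower ranks fewer.
RANK_PERCENT = {0: 1.00, 1: 0.75, 2: 0.50, 3: 0.25}

def answer_query(matched_files, rank):
    """Return only the fraction of matched files the query's rank entitles
    it to, trimming transfer costs for users who need just a sample."""
    k = max(1, int(len(matched_files) * RANK_PERCENT[rank]))
    return random.sample(matched_files, k)

matches = [f"file-{i}" for i in range(100)]
print(len(answer_query(matches, 0)))   # 100
print(len(answer_query(matches, 2)))   # 50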
Scalable Distributed Service Integrity Attestation for Software-as-a-Service Clouds
Abstract—Software-as-a-Service (SaaS) cloud systems enable application service providers to deliver their applications via massive cloud computing infrastructures. However, due to their sharing nature, SaaS clouds are vulnerable to malicious attacks. In this paper, we present IntTest, a scalable and effective service integrity attestation framework for SaaS clouds. IntTest provides a novel integrated attestation graph analysis scheme that can provide stronger attacker pinpointing power than previous schemes. Moreover, IntTest can automatically enhance result quality by replacing bad results produced by malicious attackers with good results produced by benign service providers. We have implemented a prototype of the IntTest system and tested it on a production cloud computing infrastructure using IBM System S stream processing applications. Our experimental results show that IntTest can achieve higher attacker pinpointing accuracy than existing approaches. IntTest does not require any special hardware or secure kernel support and imposes little performance impact on the application, which makes it practical for large-scale cloud systems.
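
The attestation-graph analysis is only named in the abstract; as a rough illustration of consistency-based pinpointing (replay a task at several providers, link pairs whose outputs disagree, and suspect nodes involved in many disagreements), with a purely invented input:

from collections import Counter
from itertools import combinations

# results[provider] = output each provider returned for the same replayed task.
results = {"s1": 42, "s2": 42, "s3": 42, "s4": 99, "s5": 42}

# Build the inconsistency links of an attestation graph.
inconsistent = [(a, b) for a, b in combinations(results, 2)
                if results[a] != results[b]]

# A provider that disagrees with many peers is a likely attacker.
suspicion = Counter(node for edge in inconsistent for node in edge)
threshold = len(results) // 2
suspects = [n for n, c in suspicion.items() if c > threshold]
print(suspects)   # ['s4']

IntTest's actual scheme analyzes the integrated graph globally rather than per pair, but the majority-consistency intuition is the same.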
QoS-Aware Data Replication for Data-Intensive Applications in Cloud Computing Systems
Abstract—Cloud computing provides scalable computing and storage resources. More and more data-intensive applications are developed in this computing environment. Different applications have different quality-of-service (QoS) requirements. To continuously support the QoS requirement of an application after data corruption, we propose two QoS-aware data replication (QADR) algorithms in cloud computing systems. The first algorithm adopts the intuitive idea of high-QoS first-replication (HQFR) to perform data replication. However, this greedy algorithm cannot minimize the data replication cost and the number of QoS-violated data replicas. To achieve these two minimum objectives, the second algorithm transforms the QADR problem into the well-known minimum-cost maximum-flow (MCMF) problem. By applying the existing MCMF algorithm to solve the QADR problem, the second algorithm can produce the optimal solution to the QADR problem in polynomial time, but it takes more computational time than the first algorithm. Moreover, it is known that a cloud computing system usually has a large number of nodes. We also propose node combination techniques to reduce the possibly large data replication time. Finally, simulation experiments are performed to demonstrate the effectiveness of the proposed algorithms in data replication and recovery.
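
The reduction to min-cost max-flow can be sketched generically; below is a toy replica-placement instance using networkx. The graph shape follows the usual MCMF reduction pattern, but the node names, capacities, and costs are invented and this is not the paper's exact transformation:

import networkx as nx

G = nx.DiGraph()
# Source -> data blocks needing one replica each (capacity 1).
for blk in ("b1", "b2"):
    G.add_edge("src", blk, capacity=1, weight=0)
# Block -> candidate nodes that satisfy its QoS (edge cost = replication cost).
G.add_edge("b1", "n1", capacity=1, weight=3)
G.add_edge("b1", "n2", capacity=1, weight=5)
G.add_edge("b2", "n2", capacity=1, weight=2)
G.add_edge("b2", "n3", capacity=1, weight=4)
# Node -> sink, capacity = the node's spare replica slots.
for node, slots in (("n1", 1), ("n2", 1), ("n3", 1)):
    G.add_edge(node, "snk", capacity=slots, weight=0)

flow = nx.max_flow_min_cost(G, "src", "snk")
placement = {b: n for b in ("b1", "b2")
             for n, f in flow[b].items() if f}
print(placement)           # {'b1': 'n1', 'b2': 'n2'} at total cost 5

Because MCMF is solvable in polynomial time, the optimal placement falls out directly, matching the abstract's claim for the second algorithm.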
Public Auditing for Shared Data with Efficient User Revocation in the Cloud
Abstract—With data services in the cloud, users can easily modify and share data as a group. To ensure that data integrity can be audited publicly, users need to compute signatures on all the blocks in shared data. Different blocks are signed by different users, due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks which were previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks by themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of shared data has been re-signed by the cloud. Experimental results show that our mechanism can significantly improve the efficiency of user revocation.
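
The proxy re-signature trick can be illustrated with toy exponent arithmetic: if the revoked user's signature on a block is H(m)^a and the re-signing key is b times the inverse of a modulo the group order, raising the signature to that key converts it into the existing user's H(m)^b. Real schemes verify such signatures with bilinear pairings; the snippet below only demonstrates the conversion algebra in a plain group with made-up parameters:

import hashlib, random
from math import gcd

p = 2**127 - 1          # a Mersenne prime (toy parameter, not production-grade)
g = 3
q = p - 1               # exponent modulus

def H(m: bytes) -> int:
    return pow(g, int.from_bytes(hashlib.sha256(m).digest(), "big") % q, p)

def rand_exp():
    while True:                          # exponent must be invertible mod q
        x = random.randrange(2, q)
        if gcd(x, q) == 1:
            return x

a = rand_exp()                           # Alice's (revoked user's) secret key
b = rand_exp()                           # Bob's (existing user's) secret key
rk = b * pow(a, -1, q) % q               # re-signing key handed to the cloud

m = b"shared data block"
sig_alice = pow(H(m), a, p)              # Alice's signature on the block
sig_resigned = pow(sig_alice, rk, p)     # cloud converts it without knowing b alone
print(sig_resigned == pow(H(m), b, p))   # True: now verifies as Bob's signature

The cloud never sees either secret key in full and never needs the users to download the blocks, which is exactly the saving the abstract describes.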