IEEE 2013: Property Analysis of XOR-Based Visual Cryptography
IEEE 2013 Transactions on Image Processing
Abstract: A (k, n) visual cryptographic scheme (VCS) encodes a secret image into n shadow images (printed on transparencies) distributed among n participants. When any k participants superimpose their transparencies on an overhead projector (an OR operation), the secret image is revealed to the human visual system without computation. However, the monotone property of the OR operation degrades the visual quality of the reconstructed image for OR-based VCS (OVCS). Accordingly, XOR-based VCS (XVCS), which uses the XOR operation for decoding, was proposed to enhance the contrast. In this paper, we investigate the relation between OVCS and XVCS. Our main contribution is to theoretically prove that the basis matrices of a (k, n)-OVCS can be used in a (k, n)-XVCS. Meanwhile, the contrast is enhanced 2^(k-1) times.
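As a rough illustration of why XOR decoding recovers contrast while OR decoding does not, the following Python sketch builds a toy (2, 2) scheme with random subpixel patterns (not the paper's basis-matrix construction) and stacks the two shares with both operations; the helper names and parameters are illustrative assumptions only.

```python
import numpy as np

# Minimal (2, 2) visual cryptography sketch: each secret pixel expands into two
# subpixels per share. A white pixel gets identical patterns on both shares;
# a black pixel gets complementary patterns.
rng = np.random.default_rng(0)

def make_shares(secret):
    """secret: 2-D array of 0 (white) / 1 (black). Returns two expanded shares."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            pattern = rng.permutation([0, 1])          # random subpixel pair
            s1[i, 2*j:2*j+2] = pattern
            if secret[i, j] == 0:                      # white: identical patterns
                s2[i, 2*j:2*j+2] = pattern
            else:                                      # black: complementary patterns
                s2[i, 2*j:2*j+2] = 1 - pattern
    return s1, s2

secret = np.array([[0, 1], [1, 0]], dtype=np.uint8)
a, b = make_shares(secret)

# OR-based decoding (transparency stacking): white regions remain half black,
# so contrast between white and black regions is reduced.
or_stack = a | b
# XOR-based decoding: white subpixels cancel to all 0 and black become all 1,
# so the original contrast is fully restored.
xor_stack = a ^ b
print(or_stack)
print(xor_stack)
```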
IEEE 2013 Transactions on Image Processing
Abstract: The visual cryptography scheme is a secure method that encrypts a secret document or image by breaking it into shares. A distinctive property of the visual cryptography scheme is that one can visually decode the secret image by superimposing the shares without computation. By taking advantage of this property, a third person can easily retrieve the secret image if the shares are passed in sequence over the network. This project presents an approach for encrypting visual-cryptographically generated image shares using public-key encryption. The RSA algorithm is used to provide double security for the secret document. Thus, the secret shares are not available in their actual form for alteration by adversaries who try to create fake shares. The scheme provides more secure secret shares that are robust against a number of attacks, and the system provides strong security for handwritten text, images, and printed documents over the public network.
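The following Python sketch illustrates the layered flow described above: the pixel bytes of a generated share are encrypted with textbook RSA before transmission. The tiny primes and the absence of padding are purely illustrative assumptions; a real deployment would use a vetted RSA implementation with proper padding.

```python
# Toy illustration only: each byte of a visual-cryptography share is encrypted
# with textbook RSA before transmission. Parameters are tiny and there is no
# padding, so this is NOT secure -- it only sketches the flow.

p, q = 61, 53                 # small primes (illustrative only)
n = p * q                     # 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

def rsa_encrypt_share(share_bytes):
    return [pow(b, e, n) for b in share_bytes]

def rsa_decrypt_share(cipher_ints):
    return bytes(pow(c, d, n) for c in cipher_ints)

share = bytes([0, 255, 128, 7])          # a few pixels of one share
cipher = rsa_encrypt_share(share)
assert rsa_decrypt_share(cipher) == share
print(cipher)
```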
Abstract: This paper introduces a new exemplar-based inpainting framework. A coarse version of the input image is first inpainted by non-parametric patch sampling. Compared to existing approaches, several improvements have been made (e.g., the filling-order computation and the combination of K nearest neighbors). Inpainting a coarse version of the input image reduces the computational complexity, makes the method less sensitive to noise, and allows it to work with the dominant orientations of image structures. From the low-resolution inpainted image, single-image super-resolution is applied to recover the details of the missing areas. Experimental results on natural images and texture synthesis demonstrate the effectiveness of the proposed method.
IEEE 2013 Transactions on Information Forensics and Security
Abstract: Recently, increasing attention has been paid to reversible data hiding (RDH) in encrypted images, since it maintains the excellent property that the original cover can be losslessly recovered after the embedded data is extracted, while protecting the confidentiality of the image content. All previous methods embed data by reversibly vacating room from the encrypted images, which may introduce errors in data extraction and/or image restoration. In this paper, we propose a novel method that reserves room before encryption with a traditional RDH algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted image. The proposed method achieves real reversibility, that is, data extraction and image recovery are free of any error. Experiments show that this novel method can embed more than 10 times as large payloads as previous methods for the same image quality, e.g., at a given PSNR (in dB).
IEEE 2013 Transactions on Knowledge and Data Engineering
Graph-based ranking models have been widely applied in the information retrieval area. In this paper, we focus on a well-known graph-based model, the Ranking on Data Manifold model, or Manifold Ranking (MR). In particular, it has been successfully applied to content-based image retrieval, because of its outstanding ability to discover the underlying geometrical structure of the given image database. However, manifold ranking is computationally very expensive, which significantly limits its applicability to large databases, especially when queries are out of the database (new samples). We propose a novel scalable graph-based ranking model called Efficient Manifold Ranking (EMR), which addresses the shortcomings of MR from two main perspectives: scalable graph construction and efficient ranking computation. Specifically, we build an anchor graph on the database instead of a traditional k-nearest-neighbor graph, and design a new form of adjacency matrix to speed up the ranking. An approximate method is adopted for efficient out-of-sample retrieval. Experimental results on several large-scale image databases demonstrate that EMR is a promising method for real-world retrieval applications.
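For readers unfamiliar with the underlying model, the sketch below runs the classic manifold-ranking iteration f ← αSf + (1−α)y on a small affinity graph with NumPy. It shows only the ranking step that EMR accelerates; the anchor-graph construction and out-of-sample extension from the paper are not reproduced, and the graph and parameters are illustrative.

```python
import numpy as np

# Classic manifold-ranking iteration on a small affinity graph W.
# EMR replaces W with an anchor-graph adjacency to scale up; this sketch
# only shows the ranking computation that the paper speeds up.
def manifold_ranking(W, query_idx, alpha=0.99, iters=200):
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # symmetrically normalized affinity
    y = np.zeros(W.shape[0]); y[query_idx] = 1.0
    f = y.copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y  # propagate the query score on the graph
    return f                                  # ranking scores w.r.t. the query

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(manifold_ranking(W, query_idx=0))
```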
IEEE 2013: Steganography using Genetic Algorithm along with Visual Cryptography for Wireless Network Application
Image steganography is an emerging field of research for secure data hiding and transmission over networks. The proposed system provides an approach for Least Significant Bit (LSB) based steganography using a Genetic Algorithm (GA) along with Visual Cryptography (VC). The original message is converted into cipher text using a secret key and then hidden in the LSBs of the original image. The Genetic Algorithm and Visual Cryptography are used to enhance security: the Genetic Algorithm modifies the pixel locations of the stego image, making detection of the hidden message more complex, while Visual Cryptography encrypts the visual information by breaking the image into two shares based on a threshold. The performance of the proposed system is evaluated by performing steganalysis and conducting benchmarking tests to analyze parameters such as Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). The main aim of this paper is to design an enhanced secure algorithm that uses both steganography with a Genetic Algorithm and Visual Cryptography to ensure improved security and reliability.
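A minimal sketch of the LSB-embedding stage, assuming a grayscale NumPy cover image and a bit array of cipher text; the GA-based pixel relocation and the VC share splitting are separate stages and are not shown. The MSE/PSNR lines mirror the benchmarking metrics mentioned above.

```python
import numpy as np

# Hide cipher-text bits in the least significant bit of a grayscale cover image.
def embed_lsb(cover, bits):
    flat = cover.flatten()
    if len(bits) > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
message_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, message_bits)
assert np.array_equal(extract_lsb(stego, 8), message_bits)

# Benchmarking metrics mentioned in the abstract: MSE and PSNR of the stego image.
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
print(mse, psnr)
```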
IEEE 2013: Scalable Face Image Retrieval using Attribute-Enhanced Sparse Codewords
IEEE 2013 Transactions on Multimedia
Photos with people (e.g., family, friends, celebrities) are the major interest of users. Thus, with the exponentially growing number of photos, large-scale content-based face image retrieval is an enabling technology for many emerging applications. In this work, we aim to utilize automatically detected human attributes that contain semantic cues of the face photos to improve content-based face retrieval by constructing semantic codewords for efficient large-scale face retrieval. By leveraging human attributes in a scalable and systematic framework, we propose two orthogonal methods, named attribute-enhanced sparse coding and attribute-embedded inverted indexing, to improve face retrieval in the offline and online stages. We investigate the effectiveness of different attributes and the vital factors essential for face retrieval. Experiments on two public data sets show that the proposed methods can achieve up to 43.5% relative improvement in MAP compared with existing methods.
Visual cryptography is a secret sharing scheme that uses images distributed as shares such that, when the shares are superimposed, a hidden secret image is revealed. In extended visual cryptography, the share images are constructed to contain meaningful cover images, thereby providing opportunities for integrating visual cryptography and biometric security techniques. In this paper, we propose a method for processing halftone images that improves the quality of the share images and the recovered secret image in an extended visual cryptography scheme, for which the size of the share images and the recovered image is the same as that of the original halftone secret image. The resulting scheme maintains the perfect security of the original extended visual cryptography approach.
IEEE 2013 Transactions on Communication Systems and Network Technologies
We present a novel image encryption algorithm based on the DNA sequence addition operation. The rapid growth of the Internet has made information paperless and shifted it into electronic form compared with conventional digital image distribution. In this paper we propose and implement a four-phase scheme. In the first phase, the image is converted into a binary matrix, which is then partitioned into equal blocks. In the second phase, each block is encoded into DNA sequences, and the DNA sequence addition operation is used to add these blocks; the resulting added matrix is obtained using two logistic maps. During decoding, the DNA sequence matrix is complemented and the result is encrypted using DES to obtain the encrypted image. This paper thus presents a novel encryption technique for securing images, based on a suitable encryption method.
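The sketch below shows one common way DNA encoding and DNA "addition" can be realized on pixel bytes (a 2-bit-to-base mapping and base-wise addition modulo 4). The exact coding rules and the logistic-map key stream used in the paper are not specified here, so treat the mapping as an assumption.

```python
# Sketch of DNA-style encoding and "addition" on pixel bytes.
BASES = "ACGT"                       # 00 -> A, 01 -> C, 10 -> G, 11 -> T (one common convention)
VAL = {b: i for i, b in enumerate(BASES)}

def byte_to_dna(byte):
    # split the 8-bit pixel value into four 2-bit pairs, most significant first
    return "".join(BASES[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_add(seq_a, seq_b):
    # element-wise addition of bases modulo 4
    return "".join(BASES[(VAL[a] + VAL[b]) % 4] for a, b in zip(seq_a, seq_b))

block_a = [120, 45]                  # pixel values from one block
block_b = [200, 17]                  # pixel values from another block
for pa, pb in zip(block_a, block_b):
    print(byte_to_dna(pa), "+", byte_to_dna(pb), "=",
          dna_add(byte_to_dna(pa), byte_to_dna(pb)))
```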
IEEE 2013 Transactions on Power and Computing Technologies
Image retrieval refers to extracting desired images from a large database. Retrieval may be text based or content based; here, content-based image retrieval (CBIR) is performed. CBIR is a long-standing research topic in the field of multimedia. Features such as texture and shape are analyzed. A Gabor filter is used to extract texture features from images, and a morphological closing operation combined with the Gabor filter gives better retrieval accuracy. The parameters considered are scale and orientation. After applying the Gabor filter to the image, texture features such as the mean and standard deviation are calculated, forming the feature vector. The shape feature is extracted using the Fourier descriptor and the centroid distance. To improve retrieval performance, combined texture and shape features are used, because multiple features provide more information than a single feature. Images are retrieved based on their Euclidean distance, and performance is evaluated using a precision-recall graph.
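A small sketch of the texture-feature step using scikit-image's Gabor filter: each image is filtered at a few scales and orientations, and the mean and standard deviation of each response form the feature vector compared by Euclidean distance. The frequencies, orientations, and random stand-in images are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from skimage.filters import gabor

# Filter the image with a small Gabor bank over a few scales (frequencies) and
# orientations, then keep the mean and standard deviation of each response.
def gabor_features(image, frequencies=(0.1, 0.2, 0.4),
                   thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    feats = []
    for f in frequencies:
        for theta in thetas:
            real, _ = gabor(image, frequency=f, theta=theta)
            feats.extend([real.mean(), real.std()])
    return np.array(feats)

img = np.random.rand(64, 64)       # stand-in for a database image
query = np.random.rand(64, 64)     # stand-in for a query image
# Images are ranked by Euclidean distance between feature vectors.
dist = np.linalg.norm(gabor_features(img) - gabor_features(query))
print(dist)
```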
IEEE 2013 Transactions on multimedia
Community question answering (cQA) services have gained popularity over the past years. They not only allow community members to post and answer questions but also enable general users to seek information from a comprehensive set of well-answered questions. However, existing cQA forums usually provide only textual answers, which are not informative enough for many questions. In this paper, we propose a scheme that is able to enrich textual answers in cQA with appropriate media data. Our scheme consists of three components: answer medium selection, query generation for multimedia search, and multimedia data selection and presentation. This approach automatically determines which type of media information should be added to a textual answer. It then automatically collects data from the web to enrich the answer. By processing a large set of QA pairs and adding them to a pool, our approach enables a novel multimedia question answering (MMQA) approach, as users can find multimedia answers by matching their questions with those in the pool. Unlike many MMQA research efforts that attempt to directly answer questions with image and video data, our approach is built on community-contributed textual answers and is thus able to deal with more complex questions. We have conducted extensive experiments on a multi-source QA data set, and the results demonstrate the effectiveness of our approach.
IEEE 2013 Transactions on Engineering Research & Technology
A Web Usage Mining Approach Based on a New Technique in Web Path Recommendation Systems
The Internet is one of the fastest growing areas of intelligence gathering. The ranking of web pages for Web search engines is one of the significant problems at present, and it has drawn considerable attention from the research community. Web prefetching is used to reduce the access latency of the Internet. However, if most prefetched Web pages are not visited by users in their subsequent accesses, the limited network bandwidth and server resources will not be used efficiently and may worsen the access delay problem. Therefore, it is critical to have an accurate prediction method during prefetching. To provide prediction efficiently, we propose an architecture for prediction in a Web Usage Mining system and a novel approach for classifying user navigation patterns to predict users' requests, based on clustering knowledge of users' browsing behavior. The experimental results show that the approach can improve the accuracy, precision, recall, and F-measure of classification in the architecture.
IEEE 2013: SUSIE: Search Using Services and Information Extraction
IEEE 2013 Transactions on Knowledge and Data Engineering
The interface of a Web service restricts the types of queries that the service can answer. For example, a Web service might provide a method that returns the songs of a given singer, but it might not provide a method that returns the singers of a given song. If the user asks for the singer of some specific song, then the Web service cannot be called, even though the underlying database might have the desired piece of information. This asymmetry is particularly problematic if the service is used in a Web service orchestration system. In this paper, we propose to use on-the-fly information extraction to collect values that can be used as parameter bindings for the Web service. We show how this idea can be integrated into a Web service orchestration system. Our approach is fully implemented in a prototype called SUSIE. We present experiments with real-life data and services to demonstrate the practical viability and good performance of our approach.
IEEE 2013: PMSE: A Personalized Mobile Search Engine
IEEE 2013 Transactions on Knowledge and Data Engineering
We propose a personalized mobile search engine (PMSE) that captures users' preferences in the form of concepts by mining their clickthrough data. Because of the importance of location information in mobile search, PMSE classifies these concepts into content concepts and location concepts. In addition, users' locations (positioned by GPS) are used to supplement the location concepts in PMSE. The user preferences are organized in an ontology-based, multifacet user profile, which is used to adapt a personalized ranking function for rank adaptation of future search results. To characterize the diversity of the concepts associated with a query and their relevance to the user's need, four entropies are introduced to balance the weights between the content and location facets. Based on the client-server model, we also present a detailed architecture and design for the implementation of PMSE. In our design, the client collects and stores the clickthrough data locally to protect privacy, whereas heavy tasks such as concept extraction, training, and reranking are performed at the PMSE server. Moreover, we address the privacy issue by restricting the information in the user profile exposed to the PMSE server with two privacy parameters. We prototype PMSE on the Google Android platform. Experimental results show that PMSE significantly improves precision compared with the baseline.
IEEE 2013 Transactions on Affective Computing
The relationships between consumer emotions and their buying behaviors have been well documented. Technology-savvy consumers often use the web to find information on products and services before they commit to buying. We propose a semantic web usage mining approach for discovering periodic web access patterns from annotated web usage logs that incorporates information on consumer emotions and behaviors through self-reporting and behavioral tracking. We use fuzzy logic to represent real-life temporal concepts (e.g., morning) and requested resource attributes (ontological domain concepts for the requested URLs) of periodic pattern-based web access activities. These fuzzy temporal and resource representations, which contain both behavioral and emotional cues, are incorporated into a Personal Web Usage Lattice that models the user's web access activities. From this, we generate a Personal Web Usage Ontology written in OWL, which enables semantic web applications such as personalized web resource recommendation. Finally, we demonstrate the effectiveness of our approach by presenting experimental results in the context of personalized web resource recommendation with varying degrees of emotional influence. Emotional influence has been found to contribute positively to adaptation in personalized recommendation.
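As a concrete illustration of the fuzzy temporal representation, the sketch below defines a trapezoidal membership function for the concept "morning" over the hour of a web access. The breakpoints are assumed values for illustration; the paper does not prescribe these exact shapes.

```python
# Trapezoidal membership function for the fuzzy temporal concept "morning".
# Breakpoint hours are illustrative assumptions, not the paper's values.
def morning_membership(hour, rise_start=5, full_start=7, full_end=10, fade_end=12):
    if hour <= rise_start or hour >= fade_end:
        return 0.0
    if full_start <= hour <= full_end:
        return 1.0
    if hour < full_start:                                  # rising edge
        return (hour - rise_start) / (full_start - rise_start)
    return (fade_end - hour) / (fade_end - full_end)       # falling edge

for h in (4, 6, 8, 11, 13):
    print(h, round(morning_membership(h), 2))
```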
IEEE 2013 Transactions on Computers
Secure distributed data storage can shift the burden of maintaining a large number of files from the owner to proxy servers. Proxy servers can convert encrypted files for the owner into encrypted files for the receiver without needing to know the content of the original files. In practice, the original files will be removed by the owner for the sake of space efficiency. Hence, the issues of confidentiality and integrity of the outsourced data must be addressed carefully. In this paper, we propose two identity-based secure distributed data storage (IBSDDS) schemes. Our schemes capture the following properties: (1) the file owner can decide the access permission independently, without the help of the private key generator (PKG); (2) for one query, a receiver can only access one file, instead of all files of the owner; (3) our schemes are secure against collusion attacks, namely, even if the receiver compromises the proxy servers, he cannot obtain the owner's secret key. Although the first scheme is only secure against chosen-plaintext attacks (CPA), the second scheme is secure against chosen-ciphertext attacks (CCA). To the best of our knowledge, these are the first IBSDDS schemes in which an access permission is made by the owner for an exact file and collusion attacks can be resisted in the standard model.
IEEE 2013 Transactions on Knowledge and Data Mining
Keyword search has become a ubiquitous method for users to access text data in the face of information explosion. Inverted lists are usually used to index the underlying documents so that documents can be retrieved efficiently according to a set of keywords. Since inverted lists are usually large, many compression techniques have been proposed to reduce the storage space and disk I/O time. However, these techniques usually perform decompression operations on the fly, which increases the CPU time. This paper presents a more efficient index structure, the Generalized INverted IndeX (Ginix), which merges consecutive IDs in inverted lists into intervals to save storage space. With this index structure, more efficient algorithms can be devised to perform the basic keyword search operations, i.e., union and intersection, by taking advantage of intervals. Specifically, these algorithms do not require conversions from interval lists back to ID lists. As a result, keyword search using Ginix can be more efficient than search using traditional inverted indexes. The performance of Ginix is further improved by reordering the documents in the data sets using two scalable algorithms. Experiments on the performance and scalability of Ginix on real data sets show that Ginix not only requires less storage space but also improves keyword search performance compared with traditional inverted indexes.
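The core space saving comes from storing each sorted inverted list as intervals of consecutive document IDs. A minimal sketch of that conversion is shown below; the union and intersection algorithms that operate directly on interval lists are not reproduced.

```python
# Convert a sorted inverted list of document IDs into intervals of consecutive IDs.
def ids_to_intervals(doc_ids):
    intervals = []
    start = prev = doc_ids[0]
    for doc_id in doc_ids[1:]:
        if doc_id == prev + 1:
            prev = doc_id                      # extend the current run
        else:
            intervals.append((start, prev))    # close the run
            start = prev = doc_id
    intervals.append((start, prev))
    return intervals

print(ids_to_intervals([1, 2, 3, 7, 8, 12]))   # [(1, 3), (7, 8), (12, 12)]
```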
IEEE 2013 Transactions on Computer Communication and Informatics
In a web-based e-learning environment, every learner has a distinct background, learning style, and specific goal when searching for learning material on the web. The goal of personalization is to tailor search results to a particular user based on that user's contextual information. The effectiveness of accessing learning material involves two important challenges: identifying the user context and modeling the user context as ontological profiles. This work describes an ontology-based framework for a context-aware adaptive learning system, with detailed discussions on the categorization and modeling of contextual information, along with the use of an ontology to explicitly specify learner context in an e-learning environment. Finally, we conclude by showing the applicability of the proposed ontology with an appropriate architectural overview of the e-learning system.
IEEE 2013 Transactions on Knowledge and Data Engineering
As probabilistic data management is becoming one of the main research focuses and keyword search is turning into a more popular query means, it is natural to consider how to support keyword queries on probabilistic XML data. With regard to keyword queries on deterministic XML documents, ELCA (Exclusive Lowest Common Ancestor) semantics allows more relevant fragments, rooted at the ELCAs, to appear as results and is more popular compared with other keyword query result semantics (such as SLCAs). In this paper, we investigate how to evaluate ELCA results for keyword queries on probabilistic XML documents. After defining probabilistic ELCA semantics in terms of possible world semantics, we propose an approach to compute ELCA probabilities without generating possible worlds. We then develop an efficient stack-based algorithm that can find all probabilistic ELCA results and their ELCA probabilities for a given keyword query on a probabilistic XML document. Finally, we experimentally evaluate the proposed ELCA algorithm and compare it with its SLCA counterpart in terms of result effectiveness, time and space efficiency, and scalability.
IEEE 2013 Transactions on Knowledge and Data Engineering
Generating models from large data sets, and determining which subsets of data to mine, is becoming increasingly automated. However, choosing what data to collect in the first place requires human intuition or experience, usually supplied by a domain expert. This paper describes a new approach to machine science which demonstrates for the first time that non-domain experts can collectively formulate features, and provide values for those features, such that they are predictive of some behavioral outcome of interest. This was accomplished by building a web platform in which human groups interact to both respond to questions likely to help predict a behavioral outcome and pose new questions to their peers. This results in a dynamically growing online survey, and this cooperative behavior also leads to models that can predict users' outcomes based on their responses to the user-generated survey questions. We describe two web-based experiments that instantiate this approach: the first site led to models that can predict users' monthly electric energy consumption; the other led to models that can predict users' body mass index. As exponential increases in content are often observed in successful online collaborative communities, the proposed methodology may, in the future, lead to similar exponential rises in discovery and insight into the causal factors of behavioral outcomes.
IEEE 2013 Transactions on Knowledge and Data Engineering
A large number of organizations today generate and share textual descriptions of their products, services, and actions. Such collections of textual data contain a significant amount of structured information, which remains buried in the unstructured text. While information extraction algorithms facilitate the extraction of structured relations, they are often expensive and inaccurate, especially when operating on top of text that does not contain any instances of the targeted structured information. We present a novel alternative approach that facilitates the generation of structured metadata by identifying documents that are likely to contain information of interest, where this information will subsequently be useful for querying the database. Our approach relies on the idea that humans are more likely to add the necessary metadata at creation time, if prompted by the interface; or that it is much easier for humans (and/or algorithms) to identify the metadata when such information actually exists in the document, instead of naively prompting users to fill in forms with information that is not available in the document. As a major contribution of this paper, we present algorithms that identify structured attributes that are likely to appear within a document, by jointly utilizing the content of the text and the query workload. Our experimental evaluation shows that our approach generates superior results compared with approaches that rely only on the textual content or only on the query workload to identify attributes of interest.
IEEE 2013 Transactions on Parallel and Distributed System
Abstract—In this paper, we consider the issue of data broadcasting in mobile social networks (MSNets). The objective is to broadcast data from a superuser to other users in the network. There are two main challenges under this paradigm: (1) how to represent and characterize user mobility in realistic MSNets, and (2) given the knowledge of regular users' movements, how to design an efficient superuser route to broadcast data actively. We first explore several realistic data sets to reveal both the geographic and social regularities of human mobility, and further introduce the concepts of Geo-community and Geo-centrality into MSNet analysis. Then, we employ a semi-Markov process to model user mobility based on the Geo-community structure of the network. Correspondingly, the Geo-centrality, indicating the "dynamic user density" of each Geo-community, can be derived from the semi-Markov model. Finally, considering the Geo-centrality information, we provide different route algorithms to cater to a superuser that wants to either minimize total duration or maximize dissemination ratio. To the best of our knowledge, this work is the first to study data broadcasting in a realistic MSNet setting. Extensive trace-driven simulations show that our approach consistently outperforms other existing superuser route design algorithms in terms of dissemination ratio and energy efficiency.
IEEE 2013 Transactions on Mobile Computing
Abstract—This paper introduces cooperative caching policies for minimizing electronic content provisioning cost in Social Wireless Networks (SWNETs). SWNETs are formed by mobile devices, such as data-enabled phones and electronic book readers, sharing common interests in electronic content and physically gathering together in public places. Electronic object caching in such SWNETs is shown to reduce the content provisioning cost, which depends heavily on the service and pricing dependence among various stakeholders, including content providers (CP), network service providers, and end consumers (EC). Drawing motivation from Amazon's Kindle electronic book delivery business, this paper develops practical network, service, and pricing models, which are then used to create two object caching strategies for minimizing content provisioning costs in networks with homogeneous and heterogeneous object demands. The paper constructs analytical and simulation models for analyzing the proposed caching strategies in the presence of selfish users that deviate from network-wide cost-optimal policies. It also reports results from an Android phone-based prototype SWNET, validating the presented analytical and simulation results.
IEEE 2013: CPU Scheduling for Power/Energy Management on Multicore Processors Using Cache Miss and Context Switch Data
IEEE 2013 Transactions on Parallel and Distributed System
Abstract—Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper we present two priority-based CPU scheduling algorithms, the Cache Miss Priority CPU Scheduler (CM-PCS) and the Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often-ignored dynamic performance data in order to reduce power consumption by over 20% with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU-intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes, in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using a "Kill A Watt" meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43%) and 38 watts (20.88%), but also a significant improvement in performance, performance per watt, and execution time × watt (energy) for a task consisting of twenty-four concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.
IEEE 2013: DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency in Wireless Mobile Networks
IEEE 2013 Transactions on Mobile Computing
Abstract—This paper proposes a distributed cache invalidation mechanism (DCIM), a client-based cache consistency scheme implemented on top of a previously proposed architecture for caching data items in mobile ad hoc networks (MANETs), namely COACS, where special nodes cache the queries and the addresses of the nodes that store the responses to these queries. We have also previously proposed a server-based consistency scheme, named SSUM, whereas in this paper we introduce DCIM, which is totally client-based. DCIM is a pull-based algorithm that implements adaptive time to live (TTL), piggybacking, and prefetching, and provides near-strong consistency capabilities. Cached data items are assigned adaptive TTL values that correspond to their update rates at the data source; items with expired TTL values are grouped into validation requests to the data source to refresh them, whereas unexpired ones with high request rates are prefetched from the server. In this paper, DCIM is analyzed to assess the delay and bandwidth gains (or costs) when compared with polling every time and with push-based schemes. DCIM was also implemented using ns2 and compared against client-based and server-based schemes to assess its performance experimentally. The consistency ratio, delay, and overhead traffic are reported versus several variables, and DCIM is shown to be superior to the other systems.
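A hypothetical sketch of the adaptive-TTL idea: estimate an item's TTL from a smoothed average of its observed inter-update intervals at the data source. The exact update rule DCIM uses is not reproduced; the class, parameters, and smoothing factor below are assumptions for illustration.

```python
# Hypothetical adaptive-TTL sketch: the TTL of a cached item tracks an
# exponentially weighted moving average of its inter-update intervals.
class CachedItem:
    def __init__(self, initial_ttl=60.0, alpha=0.5):
        self.ttl = initial_ttl      # seconds until the next validation
        self.alpha = alpha          # smoothing factor (assumed value)
        self.last_update = None

    def observe_update(self, timestamp):
        if self.last_update is not None:
            interval = timestamp - self.last_update
            # blend the newest inter-update interval into the TTL estimate
            self.ttl = self.alpha * interval + (1 - self.alpha) * self.ttl
        self.last_update = timestamp

item = CachedItem()
for t in (0.0, 30.0, 55.0, 90.0):   # update times reported by the data source
    item.observe_update(t)
print(round(item.ttl, 1))
```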
IEEE 2013 Transactions on Parallel and Distributed System
Wireless spoofing attacks are easy to launch and can significantly impact the performance of networks. Although the identity of a node can be verified through cryptographic authentication, conventional security approaches are not always desirable because of their overhead requirements. In this paper, we propose to use spatial information, a physical property associated with each node that is hard to falsify and not reliant on cryptography, as the basis for (1) detecting spoofing attacks, (2) determining the number of attackers when multiple adversaries masquerade as the same node identity, and (3) localizing multiple adversaries. We propose to use the spatial correlation of received signal strength (RSS) inherited from wireless nodes to detect spoofing attacks. We then formulate the problem of determining the number of attackers as a multiclass detection problem. Cluster-based mechanisms are developed to determine the number of attackers. When training data are available, we explore using the Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers. In addition, we developed an integrated detection and localization system that can localize the positions of multiple attackers. We evaluated our techniques through two testbeds using both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real office buildings. Our experimental results show that our proposed methods can achieve over 90 percent hit rate and precision when determining the number of attackers. Our localization results using a representative set of algorithms provide strong evidence of high accuracy in localizing multiple adversaries.
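The cluster-analysis step can be pictured as follows: RSS readings from several landmarks form one point per frame, and the number of well-separated clusters among frames claiming a single identity suggests how many distinct transmitters are present. The sketch below picks the cluster count by silhouette score with scikit-learn, which is a stand-in for the paper's own cluster-based criterion; the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic RSS readings (dBm) at 3 landmarks from two transmitters that both
# claim the same node identity.
rng = np.random.default_rng(1)
attacker_a = rng.normal(loc=[-60, -70, -55], scale=2, size=(50, 3))
attacker_b = rng.normal(loc=[-75, -50, -65], scale=2, size=(50, 3))
rss = np.vstack([attacker_a, attacker_b])

# Choose the number of clusters (attackers) with the best silhouette score.
best_k, best_score = 1, -1.0
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(rss)
    score = silhouette_score(rss, labels)
    if score > best_score:
        best_k, best_score = k, score
print("estimated number of attackers:", best_k)
```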
IEEE 2013 Transactions on Industrial Electronics
Abstract—The migration from wired networks to wireless networks has been a global trend in the past few decades. The mobility and scalability brought by wireless networks have made them applicable in many settings. Among all the contemporary wireless networks, the Mobile Ad hoc NETwork (MANET) is one of the most important and unique applications. In contrast to a traditional network architecture, a MANET does not require a fixed network infrastructure; every single node works as both a transmitter and a receiver. Nodes communicate directly with each other when they are within the same communication range; otherwise, they rely on their neighbors to relay messages. The self-configuring ability of nodes in a MANET has made it popular among critical mission applications such as military use or emergency recovery. However, the open medium and wide distribution of nodes make MANETs vulnerable to malicious attackers. In this case, it is crucial to develop efficient intrusion-detection mechanisms to protect MANETs from attacks. With the improvement of technology and the reduction in hardware costs, we are witnessing a current trend of expanding MANETs into industrial applications. To adapt to this trend, we strongly believe that it is vital to address their potential security issues. In this paper, we propose and implement a new intrusion-detection system named Enhanced Adaptive ACKnowledgment (EAACK), specially designed for MANETs. Compared with contemporary approaches, EAACK demonstrates higher malicious-behavior detection rates in certain circumstances while not greatly affecting network performance.
IEEE 2013 Transactions on Mobile Computing
Abstract—Vehicular Ad Hoc Networks (VANETs) adopt the Public Key Infrastructure (PKI) and Certificate Revocation Lists (CRLs) for their security. In any PKI system, the authentication of a received message is performed by checking whether the certificate of the sender is included in the current CRL and verifying the authenticity of the certificate and signature of the sender. In this paper, we propose an Expedite Message Authentication Protocol (EMAP) for VANETs, which replaces the time-consuming CRL checking process with an efficient revocation checking process. The revocation check process in EMAP uses a keyed Hash Message Authentication Code (HMAC), where the key used in calculating the HMAC is shared only between non-revoked On-Board Units (OBUs). In addition, EMAP uses a novel probabilistic key distribution, which enables non-revoked OBUs to securely share and update a secret key. EMAP can significantly decrease the message loss ratio due to message verification delay compared with conventional authentication methods employing CRLs. Security analysis and performance evaluation demonstrate that EMAP is secure and efficient. Index Terms—Vehicular networks, communication security, message authentication, certificate revocation.
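A minimal sketch of the HMAC-based revocation check, using Python's standard hmac module: a message tagged with the group secret verifies only for holders of that secret. The key name and message are hypothetical, and the probabilistic key distribution and update protocol from the paper are not shown.

```python
import hmac
import hashlib

# Secret shared only among non-revoked OBUs (hypothetical placeholder key).
group_key = b"shared-only-by-non-revoked-obus"

def tag_message(message: bytes) -> bytes:
    return hmac.new(group_key, message, hashlib.sha256).digest()

def verify_message(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(group_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)      # constant-time comparison

msg = b"emergency brake warning @ (x, y, t)"
tag = tag_message(msg)
print(verify_message(msg, tag))          # True for a sender holding the key
print(verify_message(msg, b"\x00" * 32)) # False: revoked or unknown sender
```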
IEEE 2013 Transactions on Computers
Abstract—Mobile social networks (MSNs) are a kind of delay-tolerant network consisting of many mobile nodes with social characteristics. Recently, many social-aware algorithms have been proposed to address routing problems in MSNs. However, these algorithms tend to forward messages to the nodes with locally optimal social characteristics, and thus cannot achieve optimal performance. In this paper, we propose a distributed optimal Community-Aware Opportunistic Routing (CAOR) algorithm. Our main contribution is a home-aware community model, whereby we turn an MSN into a network that includes only community homes. We prove that, in the network of community homes, we can still compute the minimum expected delivery delays of nodes through a reverse Dijkstra algorithm and achieve optimal opportunistic routing performance. Since the number of communities is far smaller than the number of nodes, the computational cost and the maintenance cost for contact information are greatly reduced. We demonstrate that our algorithm significantly outperforms previous ones through extensive simulations, based on a real MSN trace and a synthetic MSN trace.
IEEE 2013: Redundancy Management of Multipath Routing for Intrusion Tolerance in Heterogeneous Wireless Sensor Networks
IEEE 2013 Transactions on Network and Service Management
Abstract—In this paper we propose redundancy management of heterogeneous wireless sensor networks (HWSNs), utilizing multipath routing to answer user queries in the presence of unreliable and malicious nodes. The key concept of our redundancy management is to exploit the tradeoff between energy consumption and the gain in reliability, timeliness, and security to maximize the system's useful lifetime. We formulate the tradeoff as an optimization problem for dynamically determining the best redundancy level to apply to multipath routing for intrusion tolerance, so that the query response success probability is maximized while the useful lifetime is prolonged. Furthermore, we consider this optimization problem for the case in which a voting-based distributed intrusion detection algorithm is applied to detect and evict malicious nodes in an HWSN. We develop a novel probability model to analyze the best redundancy level, in terms of path redundancy and source redundancy, as well as the best intrusion detection settings, in terms of the number of voters and the intrusion invocation interval, under which the lifetime of an HWSN is maximized. We then apply the analysis results to the design of a dynamic redundancy management algorithm that identifies and applies the best design parameter settings at runtime in response to environment changes, to maximize the HWSN lifetime.
IEEE 2013 Transactions on Networking
Virtualized
cloud-based services can take advantage of statistical multiplexing across
applications to yield significant cost savings to the operator. However,
achieving similar benefits with real-time services can be a challenge. In this
paper, we seek to lower a provider’s costs of real-time IPTV services through a
virtualized IPTV architecture and through intelligent time-shifting of service
delivery. We take advantage of the differences in the deadlines associated with
Live TV versus Video-on-Demand (VoD) to
effectively multiplex these services. We provide a generalized framework for
computing the amount of resources needed to support multiple services, without missing
the deadline for any service. We construct the problem as an optimization
formulation that uses a generic cost function. We consider multiple forms for
the cost function (e.g., maximum, convex and concave functions) to reflect the
different pricing options. The solution to this formulation gives the number of
servers needed at different time instants to support these services. We
implement a simple mechanism for time-shifting scheduled jobs in a simulator
and study the reduction in server load using real traces from an operational
IPTV network. Our results show that we are able to reduce the load by ∼ 24%
(compared to a possible ∼ 31%). We also show that there are interesting open problems
in designing mechanisms that allow time-shifting of load in such environments.
IEEE 2013 Transactions on Parallel and Distributed System
Energy saving is an important issue in Mobile Ad Hoc Networks (MANETs). Recent studies show that network coding can help reduce the energy consumption in MANETs by using fewer transmissions. However, apart from transmission cost, there are other sources of energy consumption, e.g., data encryption/decryption. In this paper, we study how to leverage network coding to reduce the energy consumed by data encryption in MANETs. Interestingly, network coding has a nice property of intrinsic security, based on which encryption can be done quite efficiently. To this end, we propose P-Coding, a lightweight encryption scheme that provides confidentiality for network-coded MANETs in an energy-efficient way. The basic idea of P-Coding is to let the source randomly permute the symbols of each packet (which is prefixed with its coding vector) before performing network coding operations. Without knowing the permutation, eavesdroppers cannot locate the coding vectors for correct decoding, and thus cannot obtain any meaningful information. We demonstrate that, due to its lightweight nature, P-Coding incurs minimal energy consumption compared with other encryption schemes.
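The permutation step can be sketched as follows: a keyed, deterministic shuffle is applied to the symbols of a prefixed packet before network coding, and the receiver inverts it with the same key. The key handling shown is a placeholder assumption; P-Coding's actual key management is not reproduced.

```python
import random

# Keyed, deterministic permutation of packet symbols (coding vector + payload).
def keyed_permutation(length, key):
    order = list(range(length))
    random.Random(key).shuffle(order)            # same key -> same permutation
    return order

def permute(symbols, key):
    order = keyed_permutation(len(symbols), key)
    return [symbols[i] for i in order]

def unpermute(symbols, key):
    order = keyed_permutation(len(symbols), key)
    out = [None] * len(symbols)
    for pos, src in enumerate(order):
        out[src] = symbols[pos]                  # put each symbol back in place
    return out

packet = [3, 1, 4, 1, 5, 9, 2, 6]                # coding vector + payload symbols
scrambled = permute(packet, key="shared-secret") # hypothetical shared key
assert unpermute(scrambled, key="shared-secret") == packet
```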
IEEE 2013 Transactions on Mobile Computing
Today's location-sensitive services rely on a user's mobile device to determine the current location. This allows malicious users to access a restricted resource or provide bogus alibis by cheating on their locations. To address this issue, we propose A Privacy-Preserving LocAtion proof Updating System (APPLAUS), in which co-located Bluetooth-enabled mobile devices mutually generate location proofs and send updates to a location proof server. Periodically changed pseudonyms are used by the mobile devices to protect source location privacy from each other and from the untrusted location proof server. We also develop a user-centric location privacy model in which individual users evaluate their location privacy levels and decide whether and when to accept location proof requests. To defend against colluding attacks, we also present betweenness ranking-based and correlation clustering-based approaches for outlier detection. APPLAUS can be implemented with existing network infrastructure and can be easily deployed in Bluetooth-enabled mobile devices with little computation or power cost. Extensive experimental results show that APPLAUS can effectively provide location proofs, significantly preserve source location privacy, and effectively detect colluding attacks.
IEEE 2013 Transactions on Mobile Computing
In this paper, we give a global perspective on multicast capacity and delay analysis in Mobile Ad Hoc Networks (MANETs). Specifically, we consider four node mobility models: two-dimensional i.i.d. mobility, two-dimensional hybrid random walk, one-dimensional i.i.d. mobility, and one-dimensional hybrid random walk. Two mobility time-scales are investigated in this paper: fast mobility, where node mobility is at the same time-scale as data transmissions; and slow mobility, where node mobility is assumed to occur at a much slower time-scale than data transmissions. Given a delay constraint D, we first characterize the optimal multicast capacity for each of the eight types of mobility models, and then we develop a scheme that can achieve a capacity-delay tradeoff close to the upper bound, up to a logarithmic factor. In addition, we also study heterogeneous networks with infrastructure support.
IEEE 2013: NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems
IEEE 2013 Transactions on Dependable and Secure Computing
Cloud security is one of the most important issues that have attracted substantial research and development effort in the past few years. In particular, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. DDoS attacks usually involve early-stage actions such as multistep exploitation, low-frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, followed finally by DDoS attacks launched through the compromised zombies. Within a cloud system, especially Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult, because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multiphase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph-based analytical models and reconfigurable virtual network-based countermeasures. The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches, to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.
IEEE 2013 Transactions on Networking
Participatory sensing is an emerging computing paradigm that enables the distributed collection of data by self-selected participants. It allows the increasing number of mobile phone users to share local knowledge acquired by their sensor-equipped devices, e.g., to monitor temperature, pollution level, or consumer pricing information. While research initiatives and prototypes proliferate, their real-world impact often hinges on comprehensive user participation. If users have no incentive, or feel that their privacy might be endangered, it is likely that they will not participate. In this article, we focus on privacy protection in participatory sensing and introduce a suitable privacy-enhanced infrastructure. First, we provide a set of definitions of privacy requirements for both data producers (i.e., users providing sensed information) and consumers (i.e., applications accessing the data). Then, we propose an efficient solution designed for mobile phone users, which incurs very low overhead. Finally, we discuss a number of open problems and possible research directions.
IEEE 2013 Transactions on Mobile Computing
Passive
monitoring utilizing distributed wireless sniffers is an effective
technique to monitor activities in wireless infrastructure networks for
fault diagnosis, resource management and critical path analysis. In
this paper, we introduce a quality of monitoring (QoM) metric defined by
the expected number of active users monitored, and investigate the
problem of maximizing QoM by judiciously assigning sniffers to channels
based on the knowledge of user activities in a multi-channel wireless
network. Two types of capture models are considered. The user-centric
model assumes frame-level capturing capability of sniffers such that the
activities of different users can be distinguished while the
sniffer-centric model only utilizes the binary channel information
(active or not) at a sniffer. For the user-centric model, we show that
the implied optimization problem is NP-hard, but a constant
approximation ratio can be attained via polynomial complexity
algorithms. For the sniffer-centric model, we devise stochastic
inference schemes to transform the problem into the user-centric domain,
where we are able to apply our polynomial approximation algorithms. The
effectiveness of our proposed schemes and algorithms is further
evaluated using both synthetic data as well as real-world traces from an
operational WLAN.
IEEE 2013: SinkTrail: A Proactive Data Reporting Protocol for Wireless Sensor Networks
IEEE 2013 Transactions on Computers
In large-scale wireless sensor networks, leveraging data sinks' mobility for data gathering has drawn substantial interest in recent years. Current research either focuses on planning a mobile sink's moving trajectory in advance to achieve optimized network performance, or targets collecting a small portion of the sensed data in the network. In many application scenarios, however, a mobile sink cannot move freely in the deployed area, so pre-calculated trajectories may not be applicable. To avoid constant sink location update traffic when a sink's future locations cannot be scheduled in advance, we propose two energy-efficient proactive data reporting protocols, SinkTrail and SinkTrail-S, for mobile-sink-based data collection. The proposed protocols feature low complexity and reduced control overhead. Two unique aspects distinguish our approach from previous ones: (1) we allow sufficient flexibility in the movement of mobile sinks to dynamically adapt to various terrestrial changes; and (2) without requiring GPS devices or predefined landmarks, SinkTrail establishes a logical coordinate system for routing and forwarding data packets, making it suitable for diverse application scenarios. We systematically analyze the impact of several design factors in the proposed algorithms. Both theoretical analysis and simulation results demonstrate that the proposed algorithms reduce control overheads and yield satisfactory performance in finding shorter routing paths.
IEEE 2013 Transactions on Mobile Computing
In certain applications, the locations of events reported by a sensor network need to remain anonymous; that is, unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. Known as the source anonymity problem, this problem has emerged as an important topic in the security of wireless sensor networks, with a variety of techniques based on different adversarial assumptions being proposed. In this work, we present a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of "interval indistinguishability" and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. We then analyze existing solutions for designing anonymous sensor networks using the proposed model. We show how mapping source anonymity to binary hypothesis testing with nuisance parameters leads to converting the problem of exposing private source information into searching for an appropriate data transformation that removes or minimizes the effect of the nuisance information. By doing so, we transform the problem from analyzing real-valued sample points to binary codes, which opens the door for coding theory to be incorporated into the study of anonymous sensor networks. Finally, we discuss how existing solutions can be modified to improve their anonymity.
IEEE 2013: Privacy-assured Outsourcing of Image Reconstruction Service in Cloud
IEEE 2013 Transaction on Emerging Topics in Computing
Large-scale image data sets are being generated at an exponential rate today. Along with this data explosion is the fast-growing trend of outsourcing image management systems to the cloud for its abundant computing resources and benefits. However, how to protect the sensitive data while enabling outsourced image services becomes a major concern. To address these challenges, we propose OIRS, a novel outsourced image recovery service architecture, which exploits techniques from different domains and takes security, efficiency, and design complexity into consideration from the very beginning of the service flow. Specifically, we design OIRS under the compressed sensing (CS) framework, which is known for its simplicity in unifying traditional sampling and compression for image acquisition. Data owners only need to outsource compressed image samples to the cloud for reduced storage overhead. Moreover, in OIRS, data users can harness the cloud to securely reconstruct images without revealing information from either the compressed image samples or the underlying image content. We start with the OIRS design for sparse data, which is the typical application scenario for compressed sensing, and then show its natural extension to general data for meaningful tradeoffs between efficiency and accuracy. We thoroughly analyze the privacy protection of OIRS and conduct extensive experiments to demonstrate the system's effectiveness and efficiency. For completeness, we also discuss the expected performance speedup of OIRS through hardware built-in system design.
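To make the compressed-sensing foundation concrete, the sketch below recovers a synthetic sparse signal from a small number of random linear measurements via L1-regularized regression (scikit-learn's Lasso). It only illustrates the CS recovery that OIRS builds on, under assumed dimensions and regularization; the privacy-preserving outsourcing protocol itself is not shown.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Recover a k-sparse signal x from m << n random measurements y = A @ x.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                  # compressed samples a data owner would outsource

# L1-regularized least squares approximates the sparse solution.
recovered = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(A, y).coef_
print("reconstruction error:", np.linalg.norm(recovered - x))
```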
IEEE 2013: Enabling Data Dynamic and Indirect Mutual Trust for Cloud Computing Storage Systems
IEEE 2013 Transaction on Parallel and Distributed Systems
Currently, the amount of sensitive data produced by many organizations is outpacing their storage ability. Managing such a huge amount of data is quite expensive due to the requirements of high storage capacity and qualified personnel. Storage-as-a-Service (SaaS), offered by cloud service providers (CSPs), is a paid facility that enables organizations to outsource their data to be stored on remote servers. Thus, SaaS reduces the maintenance cost and mitigates the burden of large local data storage at the organization's end. A data owner pays for a desired level of security and must get some compensation in case of any misbehavior committed by the CSP. On the other hand, the CSP needs protection from any false accusations that may be claimed by the owner to obtain illegal compensation. In this paper, we propose a cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP and enables indirect mutual trust between them. The proposed scheme has four important features: (1) it allows the owner to outsource sensitive data to a CSP and perform full block-level dynamic operations on the outsourced data, i.e., block modification, insertion, deletion, and append; (2) it ensures that authorized users (i.e., those who have the right to access the owner's file) receive the latest version of the outsourced data; (3) it enables indirect mutual trust between the owner and the CSP; and (4) it allows the owner to grant or revoke access to the outsourced data. We discuss the security issues of the proposed scheme. In addition, we justify its performance through theoretical analysis and experimental evaluation of the storage, communication, and computation overheads.
IEEE 2013: Attribute-Based Encryption with Verifiable Outsourced Decryption
IEEE 2013 Transactions on Information Forensics and Security
Attribute-based
encryption (ABE) is a public-key-based one-to-many encryption that allows users
to encrypt and decrypt data based on user attributes. A promising application of
ABE is flexible access control of encrypted data stored in the cloud, using
access policies and ascribed attributes associated with private keys and ciphertexts.
One of the main efficiency drawbacks of the existing ABE schemes is that
decryption involves expensive pairing operations and the number of such
operations grows with the complexity of the access policy. Recently, Green et al.
proposed an ABE system with outsourced decryption that largely eliminates the
decryption overhead for users. In such a system, a user provides an untrusted
server, say a cloud service provider, with a transformation key that allows the
cloud to translate any ABE ciphertext satisfied by that user’s attributes or
access policy into a simple ciphertext, and it only incurs a small
computational overhead for the user to recover the plaintext from the
transformed ciphertext. Security of an ABE system with outsourced decryption ensures
that an adversary (including a malicious cloud) will not be able to learn
anything about the encrypted message; however, it does not guarantee the
correctness of the transformation done by the cloud. In this paper, we consider
a new requirement of ABE with outsourced decryption: verifiability. Informally,
verifiability guarantees that a user can efficiently check if the
transformation is done correctly. We give the formal model of ABE with
verifiable outsourced decryption and propose a concrete scheme. We prove that
our new scheme is both secure and verifiable, without relying on random
oracles. Finally, we show an implementation of our scheme.
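The verifiability requirement can be pictured, in a deliberately simplified non-ABE form, as attaching a short commitment to the message that the user rechecks after the untrusted cloud's transformation; the hash-based commitment below is an assumption for illustration and is not the concrete scheme proposed in the paper.

# Toy illustration of verifiable outsourced decryption: the user keeps a short
# commitment and checks it against whatever the cloud returns. This is NOT the
# paper's ABE construction; the commitment scheme here is an assumption.
import hashlib, os

def commit(message: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + message).digest()

# Encryptor side: a commitment is produced alongside the (ABE-encrypted) message.
message = b"top-secret record"
nonce = os.urandom(16)
tag = commit(message, nonce)          # stored/retrieved together with the ciphertext

# Cloud side: performs the expensive transformation and returns a candidate plaintext.
returned = b"top-secret record"       # honest cloud
tampered = b"forged record"           # malicious cloud

# User side: cheap verification instead of redoing the pairing-heavy decryption.
print(commit(returned, nonce) == tag)   # True  -> transformation accepted
print(commit(tampered, nonce) == tag)   # False -> transformation rejected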
IEEE 2013: Towards Differential Query Services in Cost-Efficient Clouds
IEEE 2013 Transactions on Parallel and Distributed Systems
Cloud computing as an emerging technology trend is expected to reshape the advances in information technology. In a cost-efficient cloud environment, a user can tolerate a certain degree of delay while retrieving information from the cloud in order to reduce costs. In this paper, we address two fundamental issues in such an environment: privacy and efficiency. We first review a private keyword-based file retrieval scheme originally proposed by Ostrovsky. This scheme allows a user to retrieve files of interest from an untrusted server without leaking any information. Its main drawback is that it causes a heavy querying overhead on the cloud, and thus goes against the original intention of cost efficiency. In this paper, we present a scheme, termed efficient information retrieval for ranked query (EIRQ), based on an aggregation and distribution layer (ADL), to reduce the querying overhead incurred on the cloud. In EIRQ, queries are classified into multiple ranks, where a higher ranked query can retrieve a higher percentage of matched files. A user can retrieve files on demand by choosing queries of different ranks. This feature is useful when there are a large number of matched files, but the user only needs a small subset of them. Under different parameter settings, extensive evaluations have been conducted on both analytical models and on a real cloud environment, in order to examine the effectiveness of our schemes.
IEEE 2013: EMAP: Expedite Message Authentication Protocol for Vehicular Ad Hoc Networks
IEEE 2013 Transactions on Mobile Computing
Abstract— Vehicular
Ad Hoc Networks (VANETs) adopt the Public Key Infrastructure (PKI) and
Certificate Revocation Lists (CRLs) for their security. In any PKI system, the
authentication of a received message is performed by checking if the
certificate of the sender is included in the current CRL, and verifying the authenticity
of the certificate and signature of the sender. In this paper, we propose an
Expedite Message Authentication Protocol (EMAP) for VANETs, which replaces the
time-consuming CRL checking process by an efficient revocation checking
process. The revocation check process in EMAP uses a keyed Hash Message
Authentication Code (HMAC), where the key used in calculating the HMAC is shared
only between non-revoked On-Board Units (OBUs). In addition, EMAP uses a novel
probabilistic key distribution, which enables non-revoked OBUs to securely
share and update a secret key. EMAP can significantly decrease the message loss
ratio due to the message verification delay compared with the conventional
authentication methods employing CRL. By conducting security analysis and
performance evaluation, EMAP is demonstrated to be secure and efficient.
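The revocation check can be pictured with the standard HMAC primitive: an OBU proves it still holds the group secret shared only among non-revoked OBUs by producing a correct HMAC over the message. EMAP's probabilistic key distribution and key-update machinery are omitted here, and the key and message layout below are assumptions.

# Sketch of an HMAC-based revocation check in the spirit of EMAP (layout assumed;
# the probabilistic key distribution and key-update steps are not shown).
import hmac, hashlib, os

group_key = os.urandom(32)            # secret shared only by non-revoked OBUs

def sign(msg: bytes, key: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def revocation_check(msg: bytes, tag: bytes, key: bytes) -> bool:
    return hmac.compare_digest(sign(msg, key), tag)

beacon = b"pos=...;speed=...;ts=1700000000"
tag = sign(beacon, group_key)                         # sender OBU (non-revoked)
print(revocation_check(beacon, tag, group_key))       # True: passes without any CRL lookup

revoked_key = os.urandom(32)                          # a revoked OBU no longer knows group_key
bad_tag = sign(beacon, revoked_key)
print(revocation_check(beacon, bad_tag, group_key))   # False: message rejected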
IEEE 2013: CLOUD COMPUTING FOR MOBILE USERS: CAN OFFLOADING COMPUTATION SAVE ENERGY?
IEEE 2013 TRANSACTIONS ON CLOUD COMPUTING
Cloud computing is a new paradigm in which computing resources such as
processing, memory, and storage are not physically present at the user’s
location. Instead, a service provider owns and manages these resources, and
users access them via the Internet. For example, Amazon Web Services lets users
store personal data via its Simple Storage Service (S3) and perform
computations on stored data using the Elastic Compute Cloud (EC2). This type of
computing provides many advantages for businesses—including low initial capital
investment, shorter start-up time for new services, lower maintenance and operation
costs, higher utilization through virtualization, and easier disaster
recovery—that make cloud computing an attractive option. Reports suggest that
there are several benefits in shifting computing from the desktop to the
cloud. What about cloud computing for mobile users? The primary constraints
for mobile computing are limited energy and wireless bandwidth. Cloud computing
can provide energy savings as a service to mobile users, though it also poses
some unique challenges.
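A back-of-envelope comparison of local execution versus offloading captures the basic tradeoff: offloading wins when the energy spent shipping data over the radio (plus idling while waiting) is smaller than the energy saved by not computing locally. All device and network figures below are made-up assumptions for illustration, not measurements from the paper.

# Illustrative energy comparison for computation offloading (all figures assumed).
C = 4_000e6          # workload: CPU cycles required
D = 0.5e6            # data to transfer, bytes
S_local = 1e9        # local CPU speed, cycles/s
P_compute = 0.9      # power while computing locally, W
P_idle = 0.3         # power while waiting for the cloud result, W
B = 1e6              # wireless bandwidth, bytes/s
P_radio = 1.3        # power while transmitting/receiving, W

E_local = P_compute * (C / S_local)                             # run it on the phone
E_offload = P_radio * (D / B) + P_idle * (C / (10 * S_local))   # ship data, wait for a 10x faster cloud

print(f"local:   {E_local:.2f} J")
print(f"offload: {E_offload:.2f} J")
print("offloading saves energy" if E_offload < E_local else "compute locally")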
IEEE 2013: CloudMoV: Cloud-based Mobile Social TV
Abstract—The rapidly increasing power of personal mobile
devices (smart phones, tablets, etc.) is providing much richer contents and
social interactions to users on the move. This trend however is throttled by
the limited battery lifetime of mobile devices and unstable wireless
connectivity, making the highest possible quality of service experienced by
mobile users not feasible. The recent cloud computing technology, with its rich
resources to compensate for the limitations of mobile devices and connections,
can potentially provide an ideal platform to support the desired mobile
services. Tough challenges arise on how to effectively exploit cloud resources
to facilitate mobile services, especially those with stringent interaction
delay requirements. In this paper, we propose the design of a Cloud-based,
novel Mobile social TV system (CloudMoV). The system effectively utilizes both
PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service) cloud
services to offer the living-room experience of video watching to a group of
disparate mobile users who can interact socially while sharing the video. To
guarantee good streaming quality as experienced by the mobile users with time
varying wireless connectivity, we employ a surrogate for each user in the IaaS
cloud for video downloading and social exchanges on behalf of the user. The
surrogate performs efficient stream transcoding that matches the current
connectivity quality of the mobile user. Given the battery life as a key
performance bottleneck, we advocate the use of burst transmission from the
surrogates to the mobile users, and carefully decide the burst size which can
lead to high energy efficiency and streaming quality. Social interactions among
the users, in terms of spontaneous textual exchanges, are effectively achieved
by efficient designs of data storage with BigTable and dynamic handling of
large volumes of concurrent messages in a typical PaaS cloud. These various
designs for flexible transcoding capabilities, battery efficiency of mobile
devices and spontaneous social interactivity together provide an ideal platform
for mobile social TV services. We have implemented CloudMoV on Amazon EC2 and
Google App Engine and verified its superior performance based on real world experiments.
IEEE 2013: On Quality of Monitoring for Multi-channel Wireless Infrastructure Networks
IEEE 2013 TRANSACTION ON MOBILE COMPUTING
Abstract—Passive monitoring utilizing distributed wireless sniffers is an
effective technique to monitor activities in wireless infrastructure networks
for fault diagnosis, resource management and critical path analysis. In this
paper, we introduce a quality of monitoring (QoM) metric defined by the
expected number of active users monitored, and investigate the problem of
maximizing QoM by judiciously assigning sniffers to channels based on the
knowledge of user activities in a multi-channel wireless network. Two types of
capture models are considered. The user-centric
model assumes frame-level capturing capability of sniffers such that the
activities of different users can be distinguished while the sniffer-centric model only
utilizes the binary channel information (active or not) at a sniffer. For the
user-centric model, we show that the implied optimization problem is NP-hard,
but a constant approximation ratio can be attained via polynomial complexity
algorithms. For the sniffer-centric model, we devise stochastic inference
schemes to transform the problem into the user-centric domain, where we are
able to apply our polynomial approximation algorithms. The effectiveness of our
proposed schemes and algorithms is further evaluated using both synthetic data
as well as real-world traces from an operational WLAN.
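For the user-centric model, a constant-factor approximation of the kind the authors mention can be obtained with the classic greedy rule for max-coverage-style objectives: repeatedly assign the sniffer/channel pair that adds the largest expected number of newly monitored users. The toy coverage sets and user activity probabilities below are assumptions, not data from the paper.

# Greedy sniffer-to-channel assignment sketch (user-centric model, toy instance).
coverage = {            # sniffer -> {channel -> set of users it can hear there}
    "s1": {1: {"u1", "u2"}, 2: {"u3"}},
    "s2": {1: {"u2"}, 2: {"u3", "u4"}},
    "s3": {1: {"u5"}, 2: {"u4", "u5"}},
}
p_active = {"u1": 0.9, "u2": 0.5, "u3": 0.8, "u4": 0.4, "u5": 0.7}

assignment, monitored = {}, set()
while len(assignment) < len(coverage):
    best = None
    for sniffer in coverage:
        if sniffer in assignment:
            continue
        for channel, users in coverage[sniffer].items():
            gain = sum(p_active[u] for u in users - monitored)  # expected newly monitored users
            if best is None or gain > best[0]:
                best = (gain, sniffer, channel, users)
    gain, sniffer, channel, users = best
    assignment[sniffer] = channel
    monitored |= users

print(assignment)                                   # e.g. {'s1': 1, 's2': 2, 's3': 1}
print("expected QoM:", sum(p_active[u] for u in monitored))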
IEEE 2013: Optimal Multicast Capacity and Delay Tradeoffs in MANETs
IEEE
2013 TRANSACTIONS ON MOBILE COMPUTING
Abstract—In this paper, we give a global perspective of multicast capacity and
delay analysis in Mobile Ad Hoc Networks (MANETs). Specifically, we consider
four node mobility models: two-dimensional i.i.d. mobility,
two-dimensional hybrid random walk, one-dimensional i.i.d. mobility, and one-dimensional hybrid random walk. Two mobility time-scales are
investigated in this paper: Fast
mobility where node mobility is at the same time-scale as data transmissions; Slow mobility where node mobility is assumed to occur at a much slower time-scale than
data transmissions. Given a delay constraint D, we first characterize
the optimal multicast capacity for each of the eight types of mobility models,
and then we develop a scheme that can achieve a capacity-delay tradeoff close
to the upper bound up to a logarithmic factor. In addition, we also study
heterogeneous networks with infrastructure support.
IEEE 2013: DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency in Wireless Mobile Networks
IEEE 2013 Transactions on Mobile Computing
Abstract—This paper proposes distributed cache
invalidation mechanism (DCIM), a client-based cache consistency scheme that is implemented
on top of a previously proposed architecture for caching data items in mobile
ad hoc networks (MANETs), namely COACS, where special nodes cache the queries
and the addresses of the nodes that store the responses to these queries. We
have also previously proposed a server-based consistency scheme, named SSUM,
whereas in this paper, we introduce DCIM that is totally client-based. DCIM is
a pull-based algorithm that implements adaptive time to live (TTL),
piggybacking, and prefetching, and provides near strong consistency
capabilities. Cached data items are assigned adaptive TTL values that
correspond to their update rates at the data source, where items with expired
TTL values are grouped in validation requests to the data source to refresh
them, whereas unexpired ones but with high request rates are prefetched from
the server. In this paper, DCIM is analyzed to assess the delay and bandwidth
gains (or costs) when compared to polling every time and push-based schemes.
DCIM was also implemented using ns2, and compared against client-based and
server-based schemes to assess its performance experimentally. The consistency
ratio, delay, and overhead traffic are reported versus several
variables, where DCIM was shown to be superior when compared to the other systems.
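The adaptive-TTL idea can be sketched as follows: each cached item's TTL tracks the observed inter-update interval at the source, expired items are batched into one validation request, and still-valid but popular items are prefetched. The smoothing factor and thresholds below are assumptions, not DCIM's exact formulas.

# Adaptive-TTL cache sketch in the spirit of DCIM (factors and thresholds assumed).
import time

class CacheEntry:
    def __init__(self, value, ttl=10.0):
        self.value, self.ttl = value, ttl
        self.cached_at = time.time()
        self.requests = 0

    def expired(self):
        return time.time() - self.cached_at > self.ttl

    def on_source_update(self, observed_interval):
        # TTL adapts toward the observed update interval at the data source.
        self.ttl = 0.5 * self.ttl + 0.5 * observed_interval

cache = {"item1": CacheEntry("v1", ttl=5), "item2": CacheEntry("v2", ttl=60)}
cache["item1"].cached_at -= 10        # simulate an item cached 10 s ago (past its 5 s TTL)
cache["item2"].requests = 12          # popular, still-valid item

# Periodic maintenance: validate expired items in one batch, prefetch hot ones.
to_validate = [k for k, e in cache.items() if e.expired()]
to_prefetch = [k for k, e in cache.items() if not e.expired() and e.requests > 10]
print("validation request for:", to_validate)   # ['item1']
print("prefetch from server:", to_prefetch)     # ['item2']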
IEEE 2013: Generation of Personalized
Ontology Based on Consumer Emotion and Behavior Analysis
IEEE 2013 Transactions on Affective Computing
Abstract—The relationships between consumer emotions and their buying
behaviors have been well documented. Technology-savvy consumers often use the
web to find information on products and services before they commit to buying.
We propose a semantic web usage mining approach for discovering periodic web
access patterns from annotated web usage logs which incorporates information on
consumer emotions and behaviors through self-reporting and behavioral tracking.
We use fuzzy logic to represent real-life temporal concepts (e.g., morning) and
requested resource attributes (ontological domain concepts for the requested
URLs) of periodic pattern-based web access activities. These fuzzy temporal and
resource representations, which contain both behavioral and emotional cues, are
incorporated into a Personal Web Usage Lattice that models the user’s web
access activities. From this, we generate a Personal Web Usage Ontology written
in OWL, which enables semantic web applications such as personalized web
resources recommendation. Finally, we demonstrate the effectiveness of our
approach by presenting experimental results in the context of personalized web
resources recommendation with varying degrees of emotional influence. Emotional
influence has been found to contribute positively to adaptation in personalized
recommendation.
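The fuzzy temporal concepts mentioned above (e.g., "morning") can be represented with simple membership functions; the trapezoidal shape and the breakpoints below are illustrative assumptions rather than the authors' exact definitions.

# Trapezoidal fuzzy membership for the temporal concept "morning" (breakpoints assumed).
def morning(hour: float) -> float:
    a, b, c, d = 5, 7, 10, 12          # ramp up 5-7h, full membership 7-10h, ramp down 10-12h
    if hour <= a or hour >= d:
        return 0.0
    if b <= hour <= c:
        return 1.0
    if hour < b:
        return (hour - a) / (b - a)
    return (d - hour) / (d - c)

for h in (6, 8, 11, 14):
    print(h, "->", round(morning(h), 2))   # 6 -> 0.5, 8 -> 1.0, 11 -> 0.5, 14 -> 0.0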
IEEE 2013: Winds of Change: From
Vendor Lock-In to the Meta Cloud
IEEE 2013 Transactions on Internet Computing
Abstract—The cloud
computing paradigm has achieved widespread adoption in recent years. Its
success is due largely to customers’ ability to use services on demand with a
pay-as-you go pricing model, which has proved convenient in many respects. Low
costs and high flexibility make migrating to the cloud compelling. Despite its
obvious advantages, however, many companies hesitate to “move to the cloud,”
mainly because of concerns related to service availability, data lock-in, and
legal uncertainties. Lock-in is particularly problematic. For one thing, even
though public cloud availability is generally high, outages still occur.
Businesses locked into such a cloud are essentially at a standstill until the
cloud is back online. Moreover, public cloud providers generally don’t
guarantee particular service level agreements (SLAs); businesses locked into a
cloud have no guarantees that it will continue to provide the required quality
of service (QoS). Finally, most public cloud providers’ terms of service let
the provider unilaterally change pricing at any time. Hence, a business locked
into a cloud has no mid- or long-term control over its own IT costs.
IEEE 2013: Toward a reliable, secure
and fault tolerant smart grid state estimation in the cloud
IEEE 2013 Transactions on Innovative Smart Grid Technologies
Abstract—The
collection and prompt analysis of synchrophasor measurements is a key step
towards enabling the future smart power grid, in which grid management
applications would be deployed to monitor and react intelligently to changing
conditions. The potential exists to slash inefficiencies and to adaptively
reconfigure the grid to take better advantage of renewables, to coordinate and
share reactive power, and to reduce the risk of catastrophic large-scale
outages. However, to realize this potential, a number of technical challenges
must be overcome. We describe a continuously active, timely monitoring
framework that we have created, architected to support a wide range of
grid-control applications in a standard manner designed to leverage cloud
computing. Cloud computing systems bring significant advantages, including an
elastic, highly available and cost-effective compute infrastructure well-suited
for this application. We believe that by showing how challenges of reliability,
timeliness, and security can be addressed while leveraging cloud standards, our
work opens the door for wider exploitation of the cloud by the smart grid
community. This paper characterizes a PMU-based state-estimation application,
explains how the desired system maps to a cloud architecture, identifies
limitations in the standard cloud infrastructure relative to the needs of this
use-case, and then shows how we adapt the basic cloud platform options with
sophisticated technologies of our own to achieve the required levels of
usability, fault tolerance, and parallelism
IEEE 2013: Security
and Privacy Enhancing Multi-Cloud Architectures
IEEE 2013 Transactions on Dependable and Secure Computing
Abstract—Security
challenges are still amongst the biggest obstacles when considering the
adoption of cloud services. This triggered a lot of research activities,
resulting in a quantity of proposals targeting the various cloud security
threats. Alongside these security issues, the cloud paradigm comes with a
new set of unique features which open the path towards novel security
approaches, techniques and architectures. This paper provides a survey on the
achievable security merits by making use of multiple distinct clouds
simultaneously. Various distinct architectures are introduced and discussed
according to their security and privacy capabilities and prospects.
IEEE 2013: Scalable and Secure Sharing
of Personal Health Records in Cloud Computing using Attribute-based Encryption
IEEE 2013 Transactions on Parallel and Distributed Systems
Abstract—Personal health record (PHR) is an emerging
patient-centric model of health information exchange, which is often outsourced
to be stored at a third party, such as cloud providers. However, there have
been wide privacy concerns as personal health information could be exposed to those
third party servers and to unauthorized parties. To assure the patients’
control over access to their own PHRs, it is a promising method to encrypt the
PHRs before outsourcing. Yet, issues such as risks of privacy exposure,
scalability in key management, flexible access and efficient user revocation,
have remained the most important challenges toward achieving fine-grained,
cryptographically enforced data access control. In this paper, we propose a
novel patient-centric framework and a suite of mechanisms for data access
control to PHRs stored in semi-trusted servers. To achieve fine-grained and
scalable data access control for PHRs, we leverage attribute based encryption
(ABE) techniques to encrypt each patient’s PHR file. Different from previous works
in secure data outsourcing, we focus on the multiple data owner scenario, and
divide the users in the PHR system into multiple security domains that greatly
reduces the key management complexity for owners and users. A high degree of
patient privacy is guaranteed simultaneously by exploiting multi-authority ABE.
Our scheme also enables dynamic codification of access policies or file
attributes, supports efficient on-demand user/attribute revocation and
break-glass access under emergency scenarios. Extensive analytical and
experimental results are presented which show the security, scalability and
efficiency of our proposed scheme
IEEE 2013: Govcloud: Using Cloud Computing
in Public Organizations
IEEE 2013 Transactions on Technology and Society Magazine
Abstract: Governments are facing reductions in ICT budgets just
as users are increasing demands for electronic services. One solution announced
aggressively by vendors is cloud computing. Cloud computing is not a new
technology but, as described by Jackson, a new way of offering services,
taking into consideration business and economic models for providing and
consuming ICT services. Here we explain the impact and benefits for public
organizations of cloud services and explore issues of why governments are slow
to adopt use of the cloud. The existing literature does not cover this subject
in detail, especially for European organizations.
IEEE 2013: Dynamic Resource Allocation
using Virtual Machines for Cloud Computing Environment
IEEE 2013 Transactions on Parallel and Distributed Systems
Cloud
computing allows business customers to scale up and down their resource usage
based on needs. Many of the touted gains in the cloud model come from resource
multiplexing through virtualization technology. In this paper, we present a
system that uses virtualization technology to allocate data center resources
dynamically based on application demands and support green computing by
optimizing the number of servers in use. We introduce the concept of “skewness”
to measure the unevenness in the multi-dimensional resource utilization of a
server. By minimizing skewness, we can combine different types of workloads
nicely and improve the overall utilization of server resources. We develop a
set of heuristics that prevent overload in the system effectively while saving
energy used. Trace driven simulation and experiment results demonstrate that
our algorithm achieves good performance.
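One natural way to quantify such unevenness, sketched here in a form consistent with (but not necessarily identical to) the paper's definition, is the root-mean-square deviation of each resource's utilization from the server's average utilization:

# Skewness sketch: deviation of per-resource utilization from the server average.
import math

def skewness(utilizations):
    """utilizations: e.g. {'cpu': 0.9, 'mem': 0.3, 'net': 0.4} as fractions of capacity."""
    r = list(utilizations.values())
    avg = sum(r) / len(r)
    return math.sqrt(sum((x / avg - 1.0) ** 2 for x in r))

balanced = {"cpu": 0.55, "mem": 0.50, "net": 0.60}
skewed   = {"cpu": 0.95, "mem": 0.20, "net": 0.30}
print(round(skewness(balanced), 3))   # small value: workloads mix well
print(round(skewness(skewed), 3))     # large value: one resource is the bottleneck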
IEEE 2013: Privacy Preserving Delegated Access Control in Public Clouds
IEEE 2013 Transactions on Knowledge and Data Engineering
Abstract—Current approaches to enforce fine-grained access control on confidential data hosted in the cloud are based on fine-grained encryption of the data. Under such approaches, data owners are in charge of encrypting the data before uploading them on the cloud and re-encrypting the data whenever user credentials or authorization policies change. Data owners thus incur high communication and computation costs. A better approach should delegate the enforcement of fine-grained access control to the cloud, so to minimize the overhead at the data owners, while assuring data confidentiality from the cloud. We propose an approach, based on two layers of encryption, that addresses such requirement. Under our approach, the data owner performs a coarse-grained encryption, whereas the cloud performs a fine-grained encryption on top of the owner encrypted data. A challenging issue is how to decompose access control policies (ACPs) such that the two layer encryption can be performed. We show that this problem is NP-complete and propose novel optimization algorithms. We utilize an efficient group key management scheme that supports expressive ACPs. Our system assures the confidentiality of the data and preserves the privacy of users from the cloud while delegating most of the access control enforcement to the cloud.
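The two-layer idea can be illustrated with off-the-shelf symmetric encryption: the owner applies a coarse-grained layer before upload, and the cloud applies a second, fine-grained layer on top of the owner's ciphertext, so that an authorized user must obtain both keys (via the group key management scheme, not shown) to read the data. The library choice (the cryptography package's Fernet) and the key handling are assumptions for illustration.

# Two-layer (over-)encryption sketch; ACP decomposition and key distribution omitted.
from cryptography.fernet import Fernet

owner_key = Fernet.generate_key()     # coarse-grained layer, held by the data owner
cloud_key = Fernet.generate_key()     # fine-grained layer, managed by the cloud

record = b"patient=alice;diagnosis=..."

# Owner encrypts once before uploading.
layer1 = Fernet(owner_key).encrypt(record)
# Cloud adds its own layer; policy changes only require redoing this outer layer.
layer2 = Fernet(cloud_key).encrypt(layer1)

# An authorized user who has been granted both keys peels the layers off.
plaintext = Fernet(owner_key).decrypt(Fernet(cloud_key).decrypt(layer2))
assert plaintext == record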
IEEE 2013: Privacy-Preserving
Public Auditing for Secure Cloud Storage
IEEE 2013 TRANSACTIONS ON CLOUD COMPUTING
Abstract—Using
Cloud Storage, users can remotely store their data and enjoy the on-demand high
quality applications and services from a shared pool of configurable computing
resources, without the burden of local data storage and maintenance. However,
the fact that users no longer have physical possession of the outsourced data
makes the data integrity protection in Cloud Computing a formidable task,
especially for users with constrained computing resources. Moreover, users
should be able to just use the cloud storage as if it is local, without
worrying about the need to verify its integrity. Thus, enabling public auditability for cloud storage is of critical importance so that users can
resort to a third party auditor (TPA) to check the integrity of outsourced data
and be worry-free. To securely introduce an effective TPA, the auditing process
should bring in no new vulnerabilities towards user data privacy, and introduce
no additional online burden to users. In this paper, we propose a secure cloud
storage system supporting privacy-preserving public auditing. We further extend
our result to enable the TPA to perform audits for multiple users
simultaneously and efficiently. Extensive security and performance analysis
show the proposed schemes are provably secure and highly efficient
IEEE 2013: Load Rebalancing for Distributed File Systems in Clouds
IEEE 2013 Transactions on Parallel and Distributed Systems
Abstract—Distributed
file systems are key building blocks for cloud computing applications based on
the Map Reduce programming paradigm. In such file systems, nodes simultaneously
serve computing and storage functions; a file is partitioned into a number of
chunks allocated in distinct nodes so that Map Reduce tasks can be performed in
parallel over the nodes. However, in a cloud computing environment, failure is
the norm, and nodes may be upgraded, replaced, and added in the system. Files
can also be dynamically created, deleted, and appended. This results in load
imbalance in a distributed file system; that is, the file chunks are not
distributed as uniformly as possible among the nodes. Emerging distributed file
systems in production systems strongly depend on a central node for chunk
reallocation. This dependence is clearly inadequate in a large-scale, failure-prone
environment because the central load balancer is put under considerable
workload that scales linearly with the system size, and may thus become the
performance bottleneck and the single point of failure. In this paper, a fully
distributed load rebalancing algorithm is presented to cope with the load
imbalance problem. Our algorithm is compared against a centralized approach in
a production system and a competing distributed solution presented in the
literature. The simulation results indicate that our proposal is comparable
with the existing centralized approach and considerably outperforms the prior
distributed algorithm in terms of load imbalance factor, movement cost, and
algorithmic overhead. The performance of our proposal implemented in the Hadoop
distributed file system is further investigated in a cluster environment.
IEEE 2013: A Load Balancing Model Based on Cloud Partitioning for the Public Cloud
IEEE 2013 TRANSACTIONS ON CLOUD COMPUTING
Abstract: Load balancing in the cloud computing environment has an important impact on performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a load balancing model for the public cloud based on the cloud partitioning concept, with a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load balancing strategy to improve efficiency in the public cloud environment.
IEEE 2013: SPOC: A Secure and Privacy-preserving Opportunistic Computing Framework for Mobile-Healthcare Emergency
IEEE 2013 Transactions on Parallel and Distributed Systems
Abstract—With the pervasiveness of smart phones and the advance of wireless body sensor networks (BSNs), mobile Healthcare (m-Healthcare), which extends the operation of Healthcare providers into a pervasive environment for better health monitoring, has attracted considerable interest recently. However, the flourishing of m-Healthcare still faces many challenges, including information security and privacy preservation. In this paper, we propose a secure and privacy-preserving opportunistic computing framework, called SPOC, for m-Healthcare emergency. With SPOC, smart phone resources including computing power and energy can be opportunistically gathered to process the computing-intensive personal health information (PHI) during an m-Healthcare emergency with minimal privacy disclosure. Specifically, to balance PHI privacy disclosure against the high reliability of PHI processing and transmission in an m-Healthcare emergency, we introduce an efficient user-centric privacy access control in the SPOC framework, which is based on attribute-based access control and a new privacy-preserving scalar product computation (PPSPC) technique, and allows a medical user to decide who can participate in the opportunistic computing to assist in processing his overwhelming PHI data. Detailed security analysis shows that the proposed SPOC framework can efficiently achieve user-centric privacy access control in m-Healthcare emergency. In addition, performance evaluations via extensive simulations demonstrate SPOC’s effectiveness in terms of providing highly reliable PHI processing and transmission while minimizing the privacy disclosure during m-Healthcare emergency.
IEEE 2013: Vampire Attacks: Draining Life from Wireless
Ad Hoc Sensor Networks
IEEE 2013 Transaction on Mobile Computing
Technology - Available in Java & .Net
Abstract— Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These “Vampire” attacks are not specific to any particular protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N is the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.
IEEE 2012: Expert Discovery and Interactions in Mixed Service-Oriented
Systems
IEEE 2012 TRANSACTIONS ON SERVICES COMPUTING
Abstract— Web-based collaborations and processes
have become essential in today’s business environments. Such processes typically
span interactions between people and services across globally distributed
companies. Web services and SOA are the de facto technology to implement
compositions of humans and services. The increasing complexity of compositions
and the distribution of people and services require adaptive and context-aware
interaction models. To support complex interaction scenarios, we introduce a mixed
service-oriented system composed of both human-provided and software-based
services interacting to perform joint activities or to solve emerging problems.
However, competencies of people evolve over time, thereby requiring approaches
for the automated management of actor skills, reputation, and trust.
Discovering the right actor in mixed service-oriented systems is challenging
due to scale and temporary nature of collaborations. We present a novel
approach addressing the need for flexible involvement of experts and knowledge
workers in distributed collaborations. We argue that the automated inference of
trust between members is a key factor for successful collaborations. Instead
of following a security perspective on trust, we focus on dynamic trust in
collaborative networks. We discuss Human-Provided Services (HPS) and an
approach for managing user preferences and network structures. HPS allows experts
to offer their skills and capabilities as services that can be requested on
demand. Our main contributions center around a context-sensitive trust-based
algorithm called Expert HITS inspired by the concept of hubs and authorities in
Web-based environments. Expert HITS takes trust-relations and link properties
in social networks into account to estimate the reputation of users.
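A trust-weighted variant of the classic hubs-and-authorities iteration conveys the flavor of Expert HITS: authority (reputation) flows along trust-weighted links and is renormalized each round. The toy trust graph and the particular weighting below are assumptions, not the paper's exact algorithm.

# Trust-weighted HITS-style iteration (toy graph; weights and normalization assumed).
trust = {                      # trust[u][v] = how much u trusts v's contributions
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"carol": 0.8},
    "carol": {"alice": 0.5},
}
nodes = list(trust)
hub = {n: 1.0 for n in nodes}
auth = {n: 1.0 for n in nodes}

for _ in range(50):
    # authority: sum of trust-weighted hub scores of the members who point at you
    auth = {n: sum(hub[u] * w for u in nodes for v, w in trust[u].items() if v == n)
            for n in nodes}
    # hub: sum of trust-weighted authority scores of the members you point at
    hub = {n: sum(auth[v] * w for v, w in trust[n].items()) for n in nodes}
    for d in (auth, hub):                      # L2-normalize to keep values bounded
        norm = sum(x * x for x in d.values()) ** 0.5 or 1.0
        for k in d:
            d[k] /= norm

print({k: round(v, 3) for k, v in sorted(auth.items(), key=lambda kv: -kv[1])})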
IEEE 2012 Cooperative Download in Vehicular Environments
Abstract—We consider a complex (i.e., non-linear) road scenario where users aboard vehicles equipped with communication interfaces are interested in downloading large files from road-side Access Points (APs). We investigate the possibility of exploiting opportunistic encounters among mobile nodes so to augment the transfer rate experienced by vehicular downloaders. To that end, we devise solutions for the selection of carriers and data chunks at the APs, and evaluate them in real-world road topologies, under different AP deployment strategies. Through extensive simulations, we show that carry & forward transfers can significantly increase the download rate of vehicular users in urban/suburban environments, and that such a result holds throughout diverse mobility scenarios, AP placements and network loads.
IEEE 2012 Privacy and Integrity Preserving Range Queries in Sensor Networks
Abstract—The architecture of two-tiered sensor networks, where storage nodes serve as an intermediate tier between sensors and a sink for storing data and processing queries, has been widely adopted because of the benefits of power and storage saving for sensors as well as the efficiency of query processing. However, the importance of storage nodes also makes them attractive to attackers. In this paper, we propose SafeQ, a protocol that prevents attackers from gaining information from both sensor collected data and sink issued queries. SafeQ also allows a sink to detect compromised storage nodes when they misbehave. To preserve privacy, SafeQ uses a novel technique to encode both data and queries such that a storage node can correctly process encoded queries over encoded data without knowing their values. To preserve integrity, we propose two schemes—one using Merkle hash trees and another using a new data structure called neighborhood chains—to generate integrity verification information so that a sink can use this information to verify whether the result of a query contains exactly the data items that satisfy the query. To improve performance, we propose an optimization technique using Bloom filters to reduce the communication cost between sensors and storage nodes.
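The Bloom-filter optimization mentioned above rests on the standard primitive sketched below: a node can summarize which items it holds in a few bits so that full lists need not be transmitted; false positives are possible, false negatives are not. The filter size and hash construction are illustrative assumptions, not SafeQ's parameters.

# Minimal Bloom filter sketch (size and hashing are illustrative assumptions).
import hashlib

class BloomFilter:
    def __init__(self, m_bits=256, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item: bytes):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
for reading in (b"temp=21", b"temp=22", b"temp=25"):
    bf.add(reading)

print(b"temp=22" in bf)   # True
print(b"temp=99" in bf)   # False (with high probability)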
IEEE 2012: Privacy-Preserving Multi-keyword Ranked Search over
Encrypted Cloud Data
Abstract— With
the advent of cloud computing, data owners are motivated to outsource their
complex data management systems from local sites to commercial public cloud for
great flexibility and economic savings. But for protecting data privacy, sensitive
data has to be encrypted before outsourcing, which obsoletes traditional data
utilization based on plaintext keyword search. Thus, enabling an encrypted
cloud data search service is of paramount importance. Considering the large
number of data users and documents in cloud, it is crucial for the search
service to allow multi-keyword query and provide result similarity ranking to
meet the effective data retrieval need. Related works on searchable encryption
focus on single keyword search or Boolean keyword search, and rarely
differentiate the search results. In this paper, for the first time, we define
and solve the challenging problem of privacy-preserving multi-keyword ranked
search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements
for such a secure cloud data utilization system to become a reality. Among various
multi-keyword semantics, we choose the efficient principle of “coordinate
matching”, i.e., as many matches as possible, to capture the similarity between
search query and data documents, and further use “inner product similarity” to
quantitatively formalize such principle for similarity measurement. We first
propose a basic MRSE scheme using secure inner product computation, and then
significantly improve it to meet different privacy requirements in two levels
of threat models. Thorough analysis investigating privacy and efficiency guarantees
of proposed schemes is given, and experiments on the real-world dataset further
show proposed schemes indeed introduce low overhead on computation and
communication.
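The "coordinate matching" principle reduces to an inner product between a binary query vector and each document's keyword vector; the tiny plaintext example below shows only the scoring idea, before the secure inner-product computation that MRSE applies on top of it. The dictionary and documents are assumptions.

# Coordinate matching as inner-product scoring (plaintext illustration only;
# MRSE additionally encrypts the vectors so the cloud never sees them).
dictionary = ["cloud", "privacy", "search", "network", "storage"]

docs = {
    "doc1": {"cloud", "privacy", "storage"},
    "doc2": {"network", "search"},
    "doc3": {"cloud", "search", "privacy"},
}
query = {"cloud", "privacy", "search"}

def to_vector(keywords):
    return [1 if w in keywords else 0 for w in dictionary]

q = to_vector(query)
scores = {name: sum(qi * di for qi, di in zip(q, to_vector(kw)))
          for name, kw in docs.items()}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, score)        # doc3: 3, doc1: 2, doc2: 1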
IEEE 2012: Ensuring Distributed Accountability for Data Sharing in
the Cloud
IEEE 2012 TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
Abstract— Cloud
computing enables highly scalable services to be easily consumed over the
Internet on an as-needed basis. A major feature of the cloud services is that
users’ data are usually processed remotely in unknown machines that users do
not own or operate. While enjoying the convenience brought by this new emerging
technology, users’ fears of losing control of their own data (particularly, financial
and health data) can become a significant barrier to the wide adoption of cloud
services. To address this problem, in this paper, we propose a novel highly
decentralized information accountability framework to keep track of the actual
usage of the users’ data in the cloud. In particular, we propose an
object-centered approach that enables enclosing our logging mechanism together
with users’ data and policies. We leverage the JAR programmable capabilities to
both create a dynamic and traveling object, and to ensure that any access to users’ data will
trigger authentication and automated logging local to the JARs. To strengthen
user’s control, we also provide distributed auditing mechanisms. We provide
extensive experimental studies that demonstrate the efficiency and
effectiveness of the proposed approaches.
IEEE 2012: Towards Secure and Dependable Storage Services in Cloud
Computing
Abstract— Cloud
storage enables users to remotely store their data and enjoy the on-demand high
quality cloud applications without the burden of local hardware and software
management. Though the benefits are clear, such a service is also relinquishing
users’ physical possession of their outsourced data, which inevitably poses new
security risks towards the correctness of the data in cloud. In order to
address this new problem and further achieve a secure and dependable cloud
storage service, we propose in this paper a flexible distributed storage
integrity auditing mechanism, utilizing the homomorphic token and distributed
erasure-coded data. The proposed design allows users to audit the cloud storage
with very lightweight communication and computation cost. The auditing result not
only ensures strong cloud storage correctness guarantee, but also
simultaneously achieves fast data error localization, i.e., the identification
of misbehaving server. Considering the cloud data are dynamic in nature, the proposed
design further supports secure and efficient dynamic operations on outsourced
data, including block modification, deletion, and append. Analysis shows the
proposed scheme is highly efficient and resilient against Byzantine failure,
malicious data modification attack, and even server colluding attacks
IEEE 2012: A Secure Intrusion
detection system against DDOS attack in Wireless Mobile Ad-hoc Network
IEEE 2012
INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS
Abstract— A wireless mobile ad-hoc network (MANET) is an emerging technology
with great potential for critical situations such as battlefields and for
commercial applications such as building and traffic surveillance. A MANET is
infrastructure-less: no centralized controller exists and each node has routing
capability. Each device in a MANET is free to move independently in any
direction and will therefore change its connections to other devices frequently.
Because no central controller exists, security is one of the major challenges
that wireless mobile ad-hoc networks face today. MANETs are a kind of wireless
ad hoc network that usually has a routable networking environment on top of a
link-layer ad hoc network. Ad hoc networking also encompasses wireless sensor
networks, so the problems faced by sensor networks are also faced by MANETs,
and deploying sensor nodes in an unattended environment increases the chances
of various attacks. There are many security attacks in MANETs, and DDoS
(distributed denial of service) is one of them. Our main aim is to study the
effect of a DDoS attack on routing load, packet drop rate, and end-to-end
delay, all of which increase under attack. Using these and other parameters, we
build a secure IDS to detect this kind of attack and block it. In this paper we
discuss some attacks on MANETs, including DDoS, and provide security against
the DDoS attack.
IEEE 2012: HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing
IEEE 2012 TRANSACTIONS
ON INFORMATION FORENSICS AND SECURITY
Abstract— Cloud
computing has emerged as one of the most influential paradigms in the IT
industry in recent years. Since this new computing technology requires users to
entrust their valuable data to cloud providers, there have been increasing
security and privacy concerns on outsourced data. Several schemes employing attribute-based
encryption (ABE) have been proposed for access control of outsourced data in
cloud computing; however, most of them suffer from inflexibility in
implementing complex access control policies. In order to realize scalable,
flexible, and fine-grained access control of outsourced data in cloud
computing, in this paper, we propose hierarchical attribute-set-based
encryption (HASBE) by extending cipher text-policy attribute-set-based
encryption (ASBE) with a hierarchical structure of users. The proposed scheme
not only achieves scalability due to its hierarchical structure, but also
inherits flexibility and fine-grained access control in supporting compound
attributes of ASBE. In addition, HASBE employs multiple value assignments for
access expiration time to deal with user revocation more efficiently than
existing schemes. We formally prove the security of HASBE based on security of
the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt
et al. and analyze its performance and computational complexity. We implement
our scheme and show that it is both efficient and flexible in dealing with
access control for outsourced data in cloud computing with comprehensive
experiments.
IEEE 2012 COMMUNICATION SYSTEMS AND NETWORK TECHNOLOGIES
Abstract— Maintaining the secrecy and confidentiality of images is a vibrant area of research, with two different approaches being followed, the first being encrypting the images through encryption algorithms using keys, the other approach involves dividing the image into random shares to maintain the images secrecy. Unfortunately heavy computation cost and key management limit the employment of the first approach and the poor quality of the recovered image from the random shares limit the applications of the second approach. In this paper we propose a novel approach without the use of encryption keys. The approach employs Sieving, Division and Shuffling to generate random shares such that with minimal computation, the original secret image can be recovered from the random shares without any loss of image quality.
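The random-share idea can be illustrated with the simplest possible two-share XOR construction (not the Sieving, Division and Shuffling procedure the paper proposes): each share alone looks like noise, while combining the shares restores the secret exactly and without keys.

# Two-share XOR secret sharing of an image-like byte array (illustration only;
# the paper's Sieving/Division/Shuffling pipeline is more elaborate).
import os

secret = bytes(range(16))                     # stand-in for image pixel data
share1 = os.urandom(len(secret))              # pure noise
share2 = bytes(a ^ b for a, b in zip(secret, share1))   # also looks like noise

recovered = bytes(a ^ b for a, b in zip(share1, share2))
assert recovered == secret                    # lossless recovery, no keys involved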
IEEE 2012: Separable Reversible Data
Hiding in Encrypted Image
IEEE 2012 TRANSACTIONS ON
INFORMATION FORENSICS AND SECURITY
Abstract— This work
proposes a novel scheme for separable reversible data hiding in encrypted
images. In the first phase, a content owner encrypts the original uncompressed
image using an encryption key. Then, a data-hider may compress the least
significant bits of the encrypted image using a data-hiding key to create a
sparse space to accommodate some additional data. With an encrypted image
containing additional data, if a receiver has the data-hiding key, he can
extract the additional data though he does not know the image content. If the
receiver has the encryption key, he can decrypt the received data to obtain an
image similar to the original one, but cannot extract the additional data. If
the receiver has both the data-hiding key and the encryption key, he can
extract the additional data and recover the original content without any error
by exploiting the spatial correlation in natural image when the amount of
additional data is not too large.
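A toy version of the separable construction: pixels are encrypted with a keystream derived from the encryption key, extra bits are written into the LSBs of pixels selected by the data-hiding key, and the hidden bits can be read back with the data-hiding key alone, without decrypting the image. The keystream derivation and embedding positions below are assumptions, and the paper's lossless content-recovery step is omitted.

# Separable data-hiding sketch: encrypt with one key, embed/extract LSBs with another.
import hashlib, random

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

pixels = list(range(64))                       # stand-in for 8-bit grayscale pixels
enc_key, hide_key = b"enc-key", b"hide-key"

# Content owner: encrypt the image.
ks = keystream(enc_key, len(pixels))
encrypted = [p ^ k for p, k in zip(pixels, ks)]

# Data hider: choose embedding positions from the data-hiding key, write bits into LSBs.
payload = [1, 0, 1, 1, 0, 1, 0, 0]
positions = random.Random(hide_key).sample(range(len(encrypted)), len(payload))
for bit, pos in zip(payload, positions):
    encrypted[pos] = (encrypted[pos] & ~1) | bit

# Receiver with only the data-hiding key: extract the payload without decrypting.
same_positions = random.Random(hide_key).sample(range(len(encrypted)), len(payload))
extracted = [encrypted[pos] & 1 for pos in same_positions]
print(extracted == payload)                    # True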
IEEE 2012: The Future of Cloud-Based
Entertainment
IEEE 2012 JOURNALS
& MAGAZINES
Abstract— This paper notes some significant trends related to the Internet and
cloud computing that will change the way entertainment is delivered and
experienced. After extrapolating some general conclusions from these trends,
two scenarios are described to illustrate predicted entertainment experiences.
IEEE 2012: AMPLE: An
Adaptive Traffic Engineering System Based on Virtual Routing Topologies
IEEE 2012
COMMUNICATIONS MAGAZINE
Abstract— Handling traffic
dynamics in order to avoid network congestion and subsequent service
disruptions is one of the key tasks performed by contemporary network
management systems. Given the simple but rigid routing and forwarding
functionalities in IP-based environments, efficient resource management and
control solutions against dynamic traffic conditions are still yet to be
obtained. In this article, we introduce AMPLE — an efficient traffic
engineering and management system that performs adaptive traffic control by
using multiple virtualized routing topologies. The proposed system consists of
two complementary components: offline link weight optimization that takes as
input the physical network topology and tries to produce maximum routing path
diversity across multiple virtual routing topologies for long term operation
through the optimized setting of link weights. Based on these diverse paths,
adaptive traffic control performs intelligent traffic splitting across
individual routing topologies in reaction to the monitored network dynamics at
short timescale. According to our evaluation with real network topologies and
traffic traces, the proposed system is able to cope almost optimally with
unpredicted traffic dynamics and, as such, it constitutes a new proposal for
achieving better quality of service and overall network performance in IP
networks.
IEEE 2012 NETWORKING
Abstract— In this paper, a
distributed adaptive opportunistic routing scheme for multi-hop wireless ad-hoc
networks is proposed. The proposed scheme utilizes a reinforcement learning
framework to opportunistically route the packets even in the absence of
reliable knowledge about channel statistics and network model. This scheme is
shown to be optimal with respect to an expected average per packet reward
criterion. The proposed routing scheme jointly addresses the issues of learning
and routing in an opportunistic context, where the network structure is
characterized by the transmission success probabilities. In particular, this
learning framework leads to a stochastic routing scheme which optimally
“explores” and “exploits” the opportunities in the network.
IEEE 2012 TRANSACTIONS ON WIRELESS COMMUNICATIONS
Abstract— Cooperative
communication has received tremendous interest for wireless networks. Most
existing works on cooperative communications are focused on link-level physical
layer issues. Consequently, the impacts of cooperative communications on
network-level upper layer issues, such as topology control, routing and network
capacity, are largely ignored. In this article, we propose a Capacity-Optimized
Cooperative (COCO) topology control scheme to improve the network capacity in
MANETs by jointly considering both upper layer network capacity and physical
layer cooperative communications. Through simulations, we show that physical
layer cooperative communications have significant impacts on the network
capacity, and the proposed topology control scheme can substantially improve
the network capacity in MANETs with cooperative communications
IEEE 2012 TRANSACTIONS ON NETWORKING
Abstract— Pre-congestion notification
(PCN) is a packet-marking technique for IP networks to notify egress nodes of a
so-called PCN domain whether the traffic rate on some links exceeds certain
configurable bounds. This feedback is used by decision points for admission
control (AC) to block new flows when the traffic load is already high.
PCN-based AC is simpler than other AC methods because interior routers do not
need to keep per-flow states. Therefore, it is currently being standardized by
the IETF. We discuss various realization options and analyze their performance
in the presence of flash crowds or with multipath routing by means of
simulation and mathematical modeling. Such situations can be aggravated by
insufficient flow aggregation, long round-trip times, on/off traffic, delayed
media, inappropriate marker configuration, and smoothed feedback
IEEE 2012: A Novel Profit Maximizing
Metric for Measuring Classification Performance of Customer Churn Prediction
Models
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Abstract— The
interest for data mining techniques has increased tremendously during the past
decades, and numerous classification techniques have been applied in a wide
range of business applications. Hence, the need for adequate performance
measures has become more important than ever. In this paper, a cost benefit
analysis framework is formalized in order to define performance measures which
are aligned with the main objectives of the end users, i.e. profit
maximization. A new performance measure is defined, the expected maximum profit
criterion. This general framework is then applied to the customer churn problem
with its particular cost benefit structure. The advantage of this approach is
that it assists companies with selecting the classifier which maximizes the
profit. Moreover, it aids with the practical implementation in the sense that it
provides guidance about the fraction of the customer base to be included in the
retention campaign.
IEEE TRANSACTIONS ON SYSTEMS, AUGUST 2012
Abstract— Web prediction is a classification problem in which
we attempt to predict the next set of Web pages that a user may visit based on
the knowledge of the previously visited pages. Predicting user’s behavior while
serving the Internet can be applied effectively in various critical applications.
Such application has traditional tradeoffs between modeling complexity and
prediction accuracy. In this paper, we analyze and study Markov model and all-Kth
Markov model in Web prediction. We propose a new modified Markov model to
alleviate the issue of scalability in the number of paths. In addition, we
present a new two-tier prediction framework that creates an example classifier EC,
based on the training examples and the generated classifiers. We show that such
framework can improve the prediction time without compromising Prediction
accuracy. We have used standard benchmark data sets to analyze, compare, and
demonstrate the effectiveness of our techniques using variations of Markov
models and association rule mining. Our experiments show the effectiveness of
our modified Markov model in reducing the number of paths without compromising accuracy.
Additionally, the results support our analysis conclusions that accuracy
improves with higher orders of all-Kth model.
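A first-order Markov predictor of the next page can be built by counting page-to-page transitions in the training sessions and predicting the most frequent successor of the current page; the sessions below are illustrative, and the paper's all-Kth and two-tier extensions are not shown.

# First-order Markov model for next-page prediction (training sessions assumed).
from collections import Counter, defaultdict

sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "products", "reviews"],
    ["home", "support", "faq"],
    ["products", "cart", "checkout"],
]

transitions = defaultdict(Counter)
for s in sessions:
    for cur, nxt in zip(s, s[1:]):
        transitions[cur][nxt] += 1

def predict_next(page: str):
    if page not in transitions:
        return None
    return transitions[page].most_common(1)[0][0]

print(predict_next("home"))      # 'products' (seen 2 of 3 times after 'home')
print(predict_next("cart"))      # 'checkout'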
IEEE 2012
Transactions on Cloud Computing
Abstract — Cloud-based outsourced storage relieves the client’s burden for
storage management and maintenance by providing a comparably low-cost,
scalable, location-independent platform. However, the fact that clients no
longer have physical possession of data indicates that they are facing a potentially
formidable risk for missing or corrupted data. To avoid the security risks,
audit services are critical to ensure the integrity and availability of
outsourced data and to achieve digital forensics and credibility on cloud
computing. Provable data possession (PDP),
which is a cryptographic technique for verifying the integrity of data without
retrieving it at an untrusted server, can be used to realize audit services. In
this paper, profiting from the interactive zero-knowledge proof system, we address
the construction of an interactive PDP protocol to prevent the fraudulence of
prover (soundness property) and the leakage of verified data (zero-knowledge
property). We prove that our construction holds these properties based on the
computational Diffie–Hellman assumption and the rewindable black-box knowledge
extractor. We also propose an efficient mechanism with respect to probabilistic
queries and periodic verification to reduce the audit costs per verification
and implement timely abnormal detection. In addition, we present an efficient
method for selecting an optimal parameter value to minimize computational
overheads of cloud audit services. Our experimental results demonstrate the
effectiveness of our approach.
IEEE 2012 - 45th
Hawaii International Conference on System Sciences
Abstract — The use of cloud computing has increased rapidly in many
organizations. Cloud computing provides many benefits in terms of low cost and
accessibility of data. Ensuring the security of cloud computing is a major
factor in the cloud computing environment, as users often store sensitive
information with cloud storage providers but these providers may be untrusted.
Dealing with “single cloud” providers is predicted to become less popular with
customers due to risks of service availability failure and the possibility of
malicious insiders in the single cloud. A movement towards “multi-clouds”, or
in other words,“interclouds” or “cloud-of-clouds” has emerged recently. This paper surveys recent research related to single
and multi-cloud security and addresses possible solutions. It is found that the
research into the use of multi-cloud providers to maintain security has
received less attention from the research community than has the use of single
clouds. This work aims to promote the use of multi-clouds due to its ability to
reduce security risks that affect the cloud computing user.
IEEE 2012: Scalable and
Secure Sharing of Personal Health Records in Cloud Computing using
Attribute-based Encryption
IEEE
2012 TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
Abstract— Personal
health record (PHR) is an emerging patient-centric model of health information
exchange, which is often outsourced to be stored at a third party, such as
cloud providers. However, there have been wide privacy concerns as personal
health information could be exposed to those third party servers and to
unauthorized parties. To assure the patients’ control over access to their
own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet,
issues such as risks of privacy exposure, scalability in key management,
flexible access and efficient user revocation, have remained the most important
challenges toward achieving
fine-grained, cryptographically enforced data access control. In this
paper, we propose a novel patient-centric framework and a suite of
mechanisms for data access control to PHRs stored in semi-trusted servers. To
achieve fine-grained and scalable data access control for PHRs, we
leverage attribute based encryption (ABE) techniques to encrypt each patient’s
PHR file. Different from previous works in secure data outsourcing, we
focus on the multiple data owner scenario, and divide the users in the PHR
system into multiple security domains that greatly reduces the key management
complexity for owners and users. A high degree of patient privacy is
guaranteed simultaneously by exploiting multi-authority ABE. Our scheme also
enables dynamic modification of access policies or file attributes,
supports efficient on-demand user/attribute revocation and break-glass access
under emergency scenarios. Extensive analytical and experimental
results are presented which show the security, scalability and efficiency of
our proposed scheme.
IEEE COMMUNICATION
SYSTEMS AND NETWORKS (COMSNETS), 2012
Abstract— Cloud computing poses new security and access control
challenges as the users outsource their sensitive data onto cloud storage. The
outsourced data should be protected from access by unauthorized users, including
the honest-but-curious cloud servers that host the data. In this paper, we
propose two access control mechanisms based on (1) a polynomial interpolation
technique and (2) a multilinear map. In these schemes, an authorized user needs
to store only a single key, irrespective of the number of data items to
which he has authorized access.
IEEE TRANSACTIONS ON
KNOWLEDGE AND DATA ENGINEERING, APRIL 2012
Abstract— Preparing a data set for analysis is generally the most
time consuming task in a data mining project, requiring many complex SQL
queries, joining tables, and aggregating columns. Existing SQL aggregations
have limitations to prepare data sets because they return one column per
aggregated group. In general, a significant manual effort is required to build
data sets, where a horizontal layout is required. We propose simple, yet
powerful, methods to generate SQL code to return aggregated columns in a horizontal
tabular layout, returning a set of numbers instead of one number per row. This
new class of functions is called horizontal aggregations. Horizontal
aggregations build data sets with a horizontal denormalized layout (e.g.,
point-dimension, observation variable, instance-feature), which is the standard
layout required by most data mining algorithms. We propose three fundamental
methods to evaluate horizontal aggregations: CASE: Exploiting the programming
CASE construct; SPJ: Based on standard relational algebra operators (SPJ
queries); PIVOT: Using the PIVOT operator, which is offered by some DBMSs.
Experiments with large tables compare the proposed query evaluation methods.
Our CASE method has similar speed to the PIVOT operator and it is much faster
than the SPJ method. In general, the CASE and PIVOT methods exhibit linear
scalability, whereas the SPJ method does not.
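As a rough illustration of the CASE method described above (not the authors' code; the table sales(storeId, productId, amount) and the pivot values are hypothetical), the following Python helper generates SQL that returns one row per store with one aggregated column per product:

# Sketch of a CASE-based horizontal aggregation. The generated SQL follows the
# general pattern SUM(CASE WHEN <pivot column> = <value> THEN <measure> END).
def horizontal_aggregation_sql(table, group_col, pivot_col, measure, pivot_values):
    """Build a SQL query with one aggregated column per pivot value."""
    case_columns = ",\n  ".join(
        f"SUM(CASE WHEN {pivot_col} = '{v}' THEN {measure} ELSE 0 END) AS {pivot_col}_{v}"
        for v in pivot_values
    )
    return (
        f"SELECT {group_col},\n  {case_columns}\n"
        f"FROM {table}\nGROUP BY {group_col};"
    )

print(horizontal_aggregation_sql("sales", "storeId", "productId", "amount",
                                 ["p1", "p2", "p3"]))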
IEEE 2012
INTERNATIONAL CONFERENCE ON COMPUTING, ELECTRONICS AND ELECTRICAL TECHNOLOGIES
Abstract— Core banking is a set of services provided by a group of
networked bank branches. Bank customers may access their funds and perform
other simple transactions from any of the member branch offices. The major
issue in core banking is the authenticity of the customer. Due to unavoidable
hacking of the databases on the internet, it is always quite difficult to trust
the information on the internet. To solve this problem of authentication, we
are proposing an algorithm based on image processing, improved Steganography
and visual cryptography. This paper proposes a technique to encode a customer's
password using improved steganography: whereas most steganography techniques use
only three or four adjacent pixels around a target pixel, the proposed technique
can utilize up to all eight adjacent neighbors, so the imperceptibility value
grows larger. The stego image is then divided into shares. The total number of
shares to be created depends on the scheme chosen by the bank. When two shares
are created, one is stored in the bank's database and the other is kept by the
customer. The customer has to present his share during every transaction; it is
stacked with the bank's share to recover the original image. A decoding method
then extracts the hidden password, and the customer is authenticated based on
acceptance or rejection of the output.
IEEE TRANSACTION ON
PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Abstract— We propose an efficient and robust solution for image set
classification. A joint representation of an image set is proposed which
includes the image samples of the set and their affine hull model. The model
accounts for unseen appearances in the form of affine combinations of sample
images. To calculate the between-set distance, we introduce the Sparse
Approximated Nearest Point (SANP). SANPs are the nearest points of two image
sets such that each point can be sparsely approximated by the image samples of its
respective set. This novel sparse formulation enforces sparsity on the sample
coefficients and jointly optimizes the nearest points as well as their sparse
approximations. Unlike standard sparse coding, the data to be sparsely
approximated is not fixed. A convex formulation is proposed to find the optimal
SANPs between two sets and the accelerated proximal gradient method is adapted
to efficiently solve this optimization. We also derive the kernel extension
of the SANP and propose an algorithm for dynamically tuning the RBF
kernel parameter while matching each pair of image sets. Comprehensive
experiments on the UCSD/Honda, CMU MoBo and YouTube Celebrities face datasets
show that our method consistently outperforms the state-of-the-art.
2012 IEEE TRANSACTIONS
ON MULTIMEDIA
Abstract— Rapidly developing social sharing websites, such as
Flickr and YouTube, allow users to create, share, annotate, and comment on media.
The large-scale user-generated metadata not only facilitate users in sharing
and organizing multimedia content, but provide useful information to improve
media retrieval and management. Personalized search serves as one of such
examples where the web search experience is improved by generating the returned
list according to the modified user search intents. In this paper, we exploit
the social annotations and propose a novel framework simultaneously considering
the user and query relevance to learn personalized image search. The basic
premise is to embed the user preference and query-related search intent into
user-specific topic spaces. Since the users’ original annotation is too sparse
for topic modeling, we need to enrich users’ annotation pool before
user-specific topic spaces construction. The proposed framework contains two
components: 1) A Ranking based Multi-correlation Tensor Factorization model is
proposed to perform annotation prediction, which is considered as users’
potential annotations for the images; 2) We introduce
User-specific Topic
Modeling to map the query relevance and user preference into the same
user-specific topic space. For performance evaluation, two resources involving
users’ social activities are
employed. Experiments on a large-scale Flickr dataset demonstrate the
effectiveness of the proposed method.
IEEE 2012: Expandable and
Cost-Effective Network Structures for Data Centers Using Dual-Port Servers
IEEE 2012 TRANSACTIONS
ON COMPUTERS
Abstract— A fundamental goal of data-center networking is to
efficiently interconnect a large number of servers with the low equipment cost.
Several server-centric network structures for data centers have been proposed.
They, however, are not truly expandable and suffer from a low degree of regularity
and symmetry. Inspired by the commodity servers in today’s data centers that
come with two ports, we consider how to build expandable and cost-effective
structures without expensive high-end switches or additional hardware on
servers beyond the two NIC ports. In this paper, two such network structures,
called HCN and BCN, are designed, both of which are of server degree 2. We also
develop the low-overhead and robust routing mechanisms for HCN and BCN.
Although the server degree is only 2, HCN can be expanded very easily to
encompass hundreds of thousands of servers with low diameter and high
bisection width. Additionally, HCN offers a high degree of regularity,
scalability and symmetry, which conform to the modular designs of data centers.
BCN is the largest known network structure for data centers with the server
degree 2 and network diameter 7. Furthermore, BCN has many attractive features,
including low diameter, high bisection width, a large number of node-disjoint
paths for one-to-one traffic, and good fault tolerance.
Mathematical analysis and comprehensive simulations show that HCN and BCN
possess excellent topological properties and are viable network structures for
data centers.
Abstract— With the advent of the Internet, various online attacks have
increased, and among them the most popular is phishing. Phishing is an
attempt by an individual or a group to get personal confidential information
such as passwords, credit card information from unsuspecting victims for
identity theft, financial gain and other fraudulent activities. Fake websites
which appear very similar to the original ones are being hosted to achieve
this. In this paper we propose a new approach named "A Novel
Anti-phishing Framework Based on Visual Cryptography" to solve the problem
of phishing. Here an image based authentication using Visual Cryptography is
implemented. The use of visual cryptography is explored to preserve the privacy
of an image captcha by decomposing the original image captcha into two shares
(known as sheets) that are stored separately (one with the user and one with the
server), such that the original image captcha can be revealed only when both are
simultaneously available; the individual sheet images do not reveal the identity
of the original image captcha. Once the original image captcha is revealed to
the user, it can be used as the password. In this way the website cross-verifies
its identity and proves to end users that it is genuine.
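A minimal sketch of the two-sheet decomposition idea follows (this is the classic textbook (2, 2) visual cryptography construction, not necessarily the paper's exact scheme): each binary captcha pixel is expanded into 2x2 sub-pixel blocks, so that neither sheet alone reveals the captcha but stacking (OR-ing) both does.

# (2, 2) visual cryptography sketch: split a binary captcha image into two
# random-looking sheets; stacking (pixelwise OR) both sheets reveals it.
import numpy as np

PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # sub-pixel pairs

def make_sheets(secret, seed=None):
    rng = np.random.default_rng(seed)
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # White pixel: same pattern (stack stays half-black).
            # Black pixel: complementary pattern (stack becomes all-black).
            s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

secret = np.array([[0, 1], [1, 0]])       # 1 = black pixel of the captcha
sheet1, sheet2 = make_sheets(secret)
stacked = np.maximum(sheet1, sheet2)      # superimposing = pixelwise OR
print(stacked)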
IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING APRIL 2012
Abstract—Byzantine-fault-tolerant replication enhances the availability
and reliability of Internet services that store critical state and preserve it despite attacks or software errors.
However, existing Byzantine-fault-tolerant storage systems either assume a
static set of replicas, or have limitations in how they handle
reconfigurations (e.g., in terms of the scalability of the solutions or the
consistency levels they provide). This can be problematic in long-lived,
large-scale systems where system membership is likely to change during the
system lifetime. In this paper, we present a complete solution for dynamically
changing system membership in a large-scale Byzantine-fault-tolerant system.
We present a service that tracks system membership and periodically notifies
other system nodes of membership changes. The membership service runs
mostly automatically, to avoid human configuration errors; is itself Byzantine
fault-tolerant and reconfigurable; and provides applications with a sequence of
consistent views of the system membership. We demonstrate the utility of
this membership service by using it in a novel distributed hash table called
dBQS that provides atomic semantics even across changes in replica sets.
dBQS is interesting in its own right because its storage algorithms extend
existing Byzantine quorum protocols to handle changes in the replica set,
and because it differs from previous DHTs by providing Byzantine fault
tolerance and offering strong semantics. We implemented the membership service
and dBQS. Our results show that the approach works well in practice: the
membership service is able to manage a large system, and the cost to change the
system membership is low.
IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING-FEBRUARY 2012
Abstract—Trust negotiation has proven to be a successful, policy-driven
approach to automated trust establishment through the release of digital credentials. Current real applications require new,
flexible approaches to trust negotiations, especially in light of the widespread
use of mobile devices. In this paper, we present a multisession dependable
approach to trust negotiations. The proposed framework supports voluntary
and unpredicted interruptions, enabling the negotiating parties to complete the
negotiation despite temporary unavailability of resources. Our protocols
address issues related to validity, temporary loss of data, and
extended unavailability of one of the two negotiators. A peer is able to
suspend an ongoing negotiation and resume it with another (authenticated)
peer. Negotiation portions and intermediate states can be safely and privately
passed among peers, to guarantee the stability needed to continue
suspended negotiations. We present a detailed analysis showing that our
protocols have several key properties, including validity, correctness,
and minimality. Also, we show how our negotiation protocol can withstand the
most significant attacks. As shown by our complexity analysis, the introduction
of the suspension and recovery procedures and of mobile negotiations does not
significantly increase the complexity of ordinary negotiations. Our protocols
require a constant number of messages whose size depends linearly on the
portion of the trust negotiation carried out before the suspension.
Abstract— There is an increasing need for fault tolerance
capabilities in logic devices brought about by the scaling of transistors to ever
smaller geometries. This paper presents a hypervisor-based replication approach
that can be applied to commodity hardware to allow for virtually lockstepped
execution. It offers many of the benefits of hardware-based lockstep while
being cheaper and easier to implement and more flexible in the configurations
supported. A novel form of processor state fingerprinting is also presented,
which can significantly reduce the fault detection latency. This further
improves reliability by triggering rollback recovery before errors are recorded
to a checkpoint. The mechanisms are validated using a full prototype and the
benchmarks considered indicate an average performance overhead of approximately
14 percent with the possibility for significant optimization. Finally, a unique
method of using virtual lockstep for fault injection testing is presented and
used to show that significant detection latency reduction is achievable by
comparing only a small amount of data across replicas.
IEEE 2012 Transactions
on Dependable and Secure Computing
Abstract— The multi-hop routing in wireless sensor networks (WSNs)
offers little protection against identity deception through replaying routing
information. An adversary can exploit this defect to launch various harmful or
even devastating attacks against the routing protocols, including sinkhole attacks,
wormhole attacks and Sybil attacks. The situation is further
aggravated by mobile and harsh network conditions. Traditional cryptographic
techniques or efforts at developing trust-aware routing protocols do not
effectively address this severe problem. To secure the WSNs against adversaries
misdirecting the multi-hop routing, we have designed and implemented TARF, a
robust trust-aware routing framework for dynamic WSNs. Without tight time
synchronization or known geographic information, TARF provides trustworthy and
energy-efficient routes. Most importantly, TARF proves effective against those
harmful attacks developed out of identity deception; the resilience of TARF is
verified through extensive evaluation with both simulation and empirical
experiments on large-scale WSNs under various scenarios including mobile and
RF-shielding network conditions. Further, we have implemented a low-overhead
TARF module in TinyOS; as demonstrated, this implementation can be incorporated
into existing routing protocols with the least effort. Based on TARF, we also
demonstrated a proof-of-concept mobile target detection application that
functions well against an anti-detection
mechanism.
IEEE TRANSACTIONS ON DEPENDABLE AND SECURE
COMPUTING, FEBRUARY 2012
Abstract— Brute force and dictionary attacks on password-only remote
login services are now widespread and ever increasing. Enabling convenient
login for legitimate users while preventing such attacks is a difficult
problem. Automated Turing Tests (ATTs) continue to be an effective,
easy-to-deploy approach to identify automated malicious login attempts with
reasonable cost of inconvenience to users. In this paper, we discuss the
inadequacy of existing and proposed login protocols designed to address large
scale online dictionary attacks (e.g., from a Botnet of hundreds of thousands
of nodes). We propose a new Password Guessing Resistant Protocol (PGRP),
derived upon revisiting prior proposals designed to restrict such attacks.
While PGRP limits the total number of login attempts from unknown remote hosts
to as low as a single attempt per username, legitimate users in most cases
(e.g., when attempts are made from known, frequently-used machines) can make
several failed login attempts before being challenged with an ATT. We analyze
the performance of PGRP with two real-world data sets and find it more
promising than existing proposals.
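The core idea can be sketched as a simple decision rule (the thresholds and the cookie check below are illustrative assumptions, not the exact PGRP conditions): an ATT challenge is skipped when the request comes from a known machine with few recent failures, while unknown hosts get at most a small number of ATT-free attempts per username.

# Illustrative sketch of a PGRP-style login decision (thresholds are assumptions).
from collections import defaultdict

KNOWN_FAIL_LIMIT = 3      # failed attempts allowed from known machines before an ATT
UNKNOWN_FAIL_LIMIT = 1    # ATT-free attempts per username from unknown hosts

failed_from_known = defaultdict(int)    # (username, source) -> failures
failed_from_unknown = defaultdict(int)  # username -> failures from unknown hosts

def requires_att(username, source, has_valid_cookie):
    """Return True if the login attempt must first solve an ATT (e.g., a CAPTCHA)."""
    if has_valid_cookie:  # known, frequently used machine
        return failed_from_known[(username, source)] >= KNOWN_FAIL_LIMIT
    return failed_from_unknown[username] >= UNKNOWN_FAIL_LIMIT

def record_failure(username, source, has_valid_cookie):
    if has_valid_cookie:
        failed_from_known[(username, source)] += 1
    else:
        failed_from_unknown[username] += 1

print(requires_att("alice", "10.0.0.5", has_valid_cookie=False))  # False: first attempt
record_failure("alice", "10.0.0.5", has_valid_cookie=False)
print(requires_att("alice", "10.0.0.5", has_valid_cookie=False))  # True: limit reached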
IEEE 2012 AUTOMATION
& COMPUTING
Abstract— The development of high speed Internet access, Web 2.0
applications and Virtualization techniques have made Cloud computing a leading
edge technology. A user in ‘Cloud’ runs web based application over Internet via
browser with a look and feel of desktop program. Cloud computing provides
dynamically scalable and virtualized resources as a service over the network at
a nominal initial investment. Data-center works as backbone in Cloud computing
where a large number of servers are networked to host computing & storage
needs of the users. The area that needs more attention is latency optimization,
so that the cloud architecture can be as ubiquitous as expected. Many data-intensive
applications produce enormous amounts of data that travel over the cloud network. As
the cloud users grow, cloud architecture should accommodate movement of
voluminous data to avoid data congestion in the network. In this paper, an
intelligent & energy efficient Cloud computing architecture is proposed
based on distributed data-centers to support application and data access from
local data-center with minimum latency. It was found that the proposed
architecture is efficient for business entrepreneurs, suitable for
e-Governance, and provides a green, eco-friendly environment for Cloud computing.
IEEE 2012: An Improved Reversible Data Hiding in
Encrypted Images Using Side Match
IEEE SIGNAL PROCESSING LETTERS, APRIL 2012
Abstract—This letter proposes an improved version of Zhang’s reversible
data hiding method in encrypted images. The original work partitions an
encrypted image into blocks, and each block carries one bit by flipping three
LSBs of a set of pre-defined pixels. The data extraction and image recovery can
be achieved by examining the block smoothness. Zhang’s work did not fully
exploit the pixels in calculating the smoothness of each block and did not
consider the pixel correlations in the border of neighboring blocks. These two
issues could reduce the correctness of data extraction. This letter adopts a
better scheme for measuring the smoothness of blocks, and uses the side-match
scheme to further decrease the error rate of extracted-bits. The experimental
results reveal that the proposed method offers better performance over Zhang’s
work. For example, when the block size is set to 8 × 8, the error rate of the
proposed method on the Lena image is 0.34%, which is significantly lower than
the 1.21% of Zhang’s work.
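The improvement hinges on how block smoothness is measured. A simplified sketch (the variable names and the exact pixel sets are assumptions, not the letter's formulas): the fluctuation of a candidate block is the sum of absolute differences between neighboring pixels inside the block, plus differences across the border with already-recovered neighboring blocks (the side match); the candidate LSB flipping that yields the smaller fluctuation is taken as correct.

# Simplified smoothness/side-match measure for one decrypted block candidate.
import numpy as np

def fluctuation(block, left_border=None, top_border=None):
    """Sum of absolute differences with horizontal/vertical neighbors, plus side-match terms."""
    b = block.astype(int)
    f = np.abs(np.diff(b, axis=0)).sum() + np.abs(np.diff(b, axis=1)).sum()
    if left_border is not None:   # column from the block already recovered to the left
        f += np.abs(b[:, 0] - left_border.astype(int)).sum()
    if top_border is not None:    # row from the block already recovered above
        f += np.abs(b[0, :] - top_border.astype(int)).sum()
    return f

def extract_bit(candidate0, candidate1, left=None, top=None):
    """The smoother candidate is assumed to be the correct (natural) image block."""
    return 0 if fluctuation(candidate0, left, top) <= fluctuation(candidate1, left, top) else 1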
IEEE/ACM CLOUD
COMPUTING June 2011
Abstract — Infrastructure as a Service (IaaS) cloud computing has
revolutionized the way we think of acquiring resources by introducing a simple
change: allowing users to lease computational resources from the cloud
provider’s datacenter for a short time by deploying virtual machines (VMs) on
these resources. This new model raises new challenges in the design and
development of IaaS middleware. One of those challenges is the need to deploy a
large number (hundreds or even thousands) of VM instances simultaneously. Once
the VM instances are deployed, another challenge is to simultaneously take a
snapshot of many images and transfer them to persistent storage to support
management tasks, such as suspend-resume and migration. With datacenters
growing rapidly and configurations becoming heterogeneous, it is important to
enable efficient concurrent deployment and snapshotting that are at the same
time hypervisor-independent and ensure maximum compatibility with different
configurations. This paper addresses these challenges by proposing a virtual
file system specifically optimized for virtual machine image storage. It is
based on a lazy transfer scheme coupled with object versioning that handles
snapshotting transparently in a hypervisor-independent fashion, ensuring high portability
for different configurations. Large-scale experiments on hundreds of nodes
demonstrate excellent performance results: speedup for concurrent VM
deployments ranges from a factor of 2 up to 25, with a reduction in bandwidth
utilization of as much as 90%.
IEEE TRANSACTIONS ON
CLOUD COMPUTING April 10-15, 2011
Abstract — Cloud Computing has great potential of providing robust
computational power to the society at reduced cost. It enables customers with
limited computational resources to outsource their large computation workloads
to the cloud, and economically enjoy the massive computational power,
bandwidth, storage, and even appropriate software that can be shared in a
pay-per-use manner. Despite the tremendous benefits, security is the primary
obstacle that prevents the wide adoption of this promising computing model,
especially for customers when their confidential data are consumed and produced
during the computation. Treating the cloud as an intrinsically insecure
computing platform from the viewpoint of the cloud customers, we must design
mechanisms that not only protect sensitive information by enabling computations
with encrypted data, but also protect customers from malicious behaviors by
enabling the validation of the computation result. Such a mechanism of general
secure computation outsourcing was recently shown to be feasible in theory, but
to design mechanisms that are practically efficient remains a very challenging
problem. Focusing on engineering computing
and optimization tasks, this paper investigates secure outsourcing of widely
applicable linear programming (LP) computations. In order to achieve practical
efficiency, our mechanism design explicitly decomposes the LP computation
outsourcing into public LP solvers running on the cloud and private LP
parameters owned by the customer. The resulting flexibility allows us to
explore an appropriate security/efficiency tradeoff via a higher-level
abstraction of LP computations than the general circuit representation. In
particular, by formulating the private data owned by the customer for the LP problem as
a set of matrices and vectors, we are able to develop a set of efficient
privacy-preserving problem transformation techniques, which allow customers to
transform the original LP problem into an arbitrary one while protecting
sensitive input/output information. To validate the computation result, we
further explore the fundamental duality theorem of LP computation and derive
the necessary and sufficient conditions that a correct result must satisfy. Such
a result verification mechanism is extremely efficient and incurs close-to-zero
additional cost on both the cloud server and customers. Extensive security analysis
and experiment results show the immediate practicability of our mechanism
design.
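A heavily simplified sketch of the masking idea follows (this is not the paper's full mechanism: it hides only the equality constraints behind a secret random invertible matrix and leaves the objective visible, but it shows why the cloud's answer remains usable by the customer).

# Toy sketch of masking LP data before outsourcing: equality constraints
# A x = b are replaced by (Q A) x = (Q b) for a secret random invertible Q,
# which preserves the feasible set, so the cloud's solution can be checked
# directly against the original (private) data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 3.0])

Q = rng.normal(size=(2, 2))            # secret masking matrix (invertible w.h.p.)
A_masked, b_masked = Q @ A, Q @ b      # what the cloud sees instead of (A, b)

res = linprog(c, A_eq=A_masked, b_eq=b_masked, bounds=[(0, None)] * 3)
x = res.x
print("cloud result feasible for the original LP:", np.allclose(A @ x, b))
print("objective value:", c @ x)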
IEEE 2011: Enabling Data
Hiding for Resource Sharing in Cloud Computing Environments Based on DNA
Sequences
2011 IEEE World
Congress on Services
Abstract — The main target of this paper is to propose an algorithm to
implement data hiding in DNA sequences to increase the confidentiality and
complexity from a software point of view in cloud computing environments. By
utilizing some interesting features of DNA sequences, data hiding is
implemented in the cloud. The proposed algorithm is based on binary coding and
complementary pair rules. A DNA reference sequence is chosen and the secret
data M is hidden in it. After several encoding steps, the result M′′′ is
uploaded to the cloud environment. The process of identifying and extracting
the original data M, hidden in the DNA reference sequence, begins once clients
decide to use the data. Furthermore, security issues are analyzed to assess the
complexity of the algorithm.
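A toy sketch of the binary-coding step follows (the complementary pair rules and the full construction of M′′′ are specific to the paper and omitted; the 2-bit mapping and the interleaving layout are assumptions): secret bits are mapped two at a time onto nucleotides and interleaved into a chosen DNA reference sequence, from which a receiver who knows the reference layout can recover them.

# Toy DNA-based data hiding via binary coding (illustrative only).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def hide(secret_bits, reference):
    """Interleave one payload base after each reference base."""
    payload = [BITS_TO_BASE[secret_bits[i:i+2]] for i in range(0, len(secret_bits), 2)]
    assert len(payload) <= len(reference), "reference sequence too short"
    out = []
    for i, ref_base in enumerate(reference):
        out.append(ref_base)
        if i < len(payload):
            out.append(payload[i])
    return "".join(out)

def extract(stego, num_bits):
    """Recover num_bits payload bits; the receiver knows the interleaving layout."""
    n_bases = num_bits // 2
    payload = [stego[2 * i + 1] for i in range(n_bases)]
    return "".join(BASE_TO_BITS[b] for b in payload)

stego = hide("1001", "ACGTACGT")
print(stego, extract(stego, 4))   # payload "1001" recovered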
IEEE 2011: A Business
Model for Cloud Computing Based on a Separate Encryption and Decryption Service
IEEE
2011 Information Science and Applications International
Conference April 2011
Abstract — Enterprises usually store data in internal storage and install
firewalls to prevent intruders from accessing the data. They also
standardize data access procedures to prevent insiders from disclosing the
information without permission. In cloud computing, the data will be stored in
storage provided by service providers. Service providers must have a viable way
to protect their clients’ data, especially to prevent the data from disclosure
by unauthorized insiders. Storing the data in encrypted form is a common method
of information privacy protection. If a cloud system is responsible for both
tasks on storage and encryption/decryption of data, the system administrators
may simultaneously obtain encrypted data and decryption keys. This allows them
to access information without authorization and thus poses a risk to
information privacy. This study proposes a business model for cloud computing
based on the concept of separating the encryption and decryption service from
the storage service. Furthermore, the party responsible for the data storage
system must not store data in plaintext, and the party responsible for data
encryption and decryption must delete all data once the encryption or
decryption computation is complete. A CRM (Customer Relationship Management)
service is described in this paper as an example to illustrate the proposed
business model. The exemplary service utilizes three cloud systems, including
an encryption and decryption system, a storage system, and a CRM application
system. One service provider operates the encryption and decryption system
while other providers operate the storage and application systems, according to
the core concept of the proposed business model. This paper further includes
suggestions for a multi-party Service-Level Agreement (SLA) suitable for use in
the proposed business model.
IEEE 2011 TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, MAY 2011
Abstract — Cloud computing is the long dreamed vision of computing as a
utility, where users can remotely store their data into the cloud so as to
enjoy the on-demand high quality applications and services from a shared pool
of configurable computing resources. By data outsourcing, users can be relieved
from the burden of local data storage and maintenance. Thus, enabling public
auditability for cloud data storage security is of critical importance so that
users can resort to an external audit party to check the integrity of
outsourced data when needed. To securely introduce an effective third-party
auditor (TPA), the following fundamental requirements have to be met: the TPA
should be able to efficiently audit the cloud data storage without demanding a
local copy of the data, and it should introduce no additional online burden to
the cloud user. Specifically, our contribution in this work can be summarized
as the following three aspects:
1) We motivate the
public auditing system of data storage security in Cloud Computing and provide
a privacy-preserving auditing protocol, i.e., our scheme supports an external
auditor to audit user’s outsourced data in the cloud without learning knowledge
on the data content.
2) To the best of our
knowledge, our scheme is the first to support scalable and efficient public
auditing in the Cloud Computing. In particular, our scheme achieves batch
auditing where multiple delegated auditing tasks from different users can be
performed simultaneously by the TPA.
3) We prove the
security and justify the performance of our proposed schemes through concrete
experiments and comparisons with the state-of-the-art.
IEEE 2011 Conference
on Computer Communications April 2011
Abstract — The end of this decade is marked by a paradigm shift of the
industrial information technology towards a pay-per-use service business model
known as cloud computing. Cloud data storage redefines the security issues
targeted at the customer’s outsourced data (data that is not stored or retrieved
from the customer’s own servers). In this work we observe that, from a customer’s
point of view, relying upon a single service provider (SP) for outsourced data is
not very promising. In addition, better privacy as well as data availability can
be achieved by dividing the user’s data block into data pieces and distributing
them among the available SPs in such a way that no fewer than a threshold number
of SPs can take part in successful retrieval of the whole data block. In this
paper, we propose a secured cost-effective
multi-cloud storage (SCMCS) model in cloud computing which holds an economical
distribution of data among the available SPs in the market, to provide
customers with data availability as well as secure storage. Our results show
that, our proposed model provides a better decision for customers according to
their available budgets.
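A minimal sketch of the threshold property follows, using textbook Shamir secret sharing over a prime field as a stand-in for the paper's distribution scheme (which may differ): a data block is split into pieces for the available SPs, and any threshold number of them can reconstruct it, while fewer cannot.

# Textbook Shamir (k, n) threshold sharing, used here only to illustrate
# splitting a data block among n service providers.
import random

P = 2**127 - 1  # a Mersenne prime larger than any block value we share

def split(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):                 # one share per service provider
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

block = int.from_bytes(b"user data block", "big")
shares = split(block, k=3, n=5)
print(reconstruct(shares[:3]) == block)   # any 3 of the 5 SPs suffice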
IEEE TRANSACTIONS ON
KNOWLEDGE AND DATA ENGINEERING, March 2011
Abstract— A spatial preference query ranks objects based on the
qualities of features in their spatial neighborhood. For example, using a real
estate agency database of flats for lease, a customer may want to rank the
flats with respect to the appropriateness of their location, defined after
aggregating the qualities of other features (e.g., restaurants, cafes,
hospital, market, etc.) within their spatial neighborhood. Such a neighborhood
concept can be specified by the user via different functions. It can be an explicit
circular region within a given distance from the flat. Another intuitive
definition is to consider the whole spatial domain and assign higher weights to
the features based on their proximity to the flat. In this paper, we formally
define spatial preference queries and propose appropriate indexing techniques
and search algorithms for them. Extensive evaluation of our methods on both
real and synthetic data reveals that an optimized branch-and-bound solution is
efficient and robust with respect to different parameters.
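A toy illustration of the ranking semantics follows (not the proposed indexing or branch-and-bound algorithms; the data and radius are made up): each flat's score aggregates the best quality of each feature type found within a given radius.

# Toy spatial preference scoring: rank flats by the sum, over feature types,
# of the best feature quality within radius r (the range-score variant).
from math import dist

flats = {"flat1": (0.0, 0.0), "flat2": (5.0, 5.0)}
features = [  # (type, location, quality in [0, 1]) -- illustrative data
    ("restaurant", (0.5, 0.2), 0.9),
    ("restaurant", (4.8, 5.1), 0.4),
    ("hospital",   (1.0, 0.0), 0.7),
]

def score(flat_xy, radius=2.0):
    best = {}
    for ftype, loc, quality in features:
        if dist(flat_xy, loc) <= radius:
            best[ftype] = max(best.get(ftype, 0.0), quality)
    return sum(best.values())

ranking = sorted(flats, key=lambda f: score(flats[f]), reverse=True)
print(ranking)   # flat1 first: a good restaurant and a hospital nearby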
IEEE
INTERNATIONAL JOURNAL OF ENGINEERING SCIENCE AND TECHNOLOGY (IJEST), MARCH 2011
Abstract— Visual Cryptography is a special encryption technique to hide
information in images in such a way that it can be decrypted by the human
visual system. The benefit of the visual secret sharing scheme is in its
decryption process where without any complex cryptographic computation
encrypted data is decrypted using the Human Visual System (HVS). But the encryption
technique needs cryptographic computation to divide the image into a number of
parts, say n. A k-n secret sharing scheme is a special type of visual
cryptographic technique in which at least a group of k shares out of n shares
reveals the secret information; fewer shares reveal no information. In our
paper we propose a new k-n secret sharing scheme for color images where
encryption (division) of the image is done using a random number generator.
IEEE TRANSACTIONS ON
INFORMATION FORENSICS AND SECURITY, JUNE 2011
Abstract— A visual cryptography scheme (VCS) is a kind of secret sharing
scheme which allows the encoding of a secret image into shares distributed to
participants. The beauty of such a scheme is that a set of qualified
participants is able to recover the secret image without any cryptographic
knowledge and computation devices. An extended visual cryptography scheme
(EVCS) is a kind of VCS which consists of meaningful shares (compared to the
random shares of traditional VCS). In this paper, we propose a construction of
EVCS which is realized by embedding random shares into meaningful covering shares,
and we call it the embedded EVCS. Experimental results compare some of the
well-known EVCSs proposed in recent years systematically, and show that the
proposed embedded EVCS has competitive visual quality compared with many of the
well-known EVCSs in the literature. In addition, it has several specific
advantages over each of these well-known EVCSs.
IEEE TRANSACTIONS ON
INSTRUMENTATION AND MEASUREMENT, OCTOBER 2011
Abstract—Digital image libraries and other multimedia databases have been
dramatically expanded in recent years. In order to effectively and precisely
retrieve the desired images from a large image database, the development of a
content-based image retrieval (CBIR) system has become an important research
issue. However, most of the proposed approaches emphasize finding the best
representation for different image features. Furthermore, very few
representative works adequately consider the user’s subjectivity and preferences in
the retrieval process. In this paper, a user-oriented mechanism for a CBIR method
based on an interactive genetic algorithm (IGA) is proposed. Color attributes
like the mean value, the standard deviation, and the image bitmap of a color
image are used as the features for retrieval. In addition, the entropy based on
the gray level co-occurrence matrix and the edge histogram of an image is also
considered as the texture features. Furthermore, to reduce the gap between the
retrieval results and the users’ expectations, the IGA is employed to help the
users identify the images that best satisfy their needs.
Experimental results and comparisons demonstrate the feasibility of the
proposed approach.
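As an illustration of the color features mentioned above (the bitmap, the texture features, and the IGA loop itself are omitted), the per-channel mean and standard deviation form a compact descriptor that can be compared between a query and database images:

# Color-moment features (per-channel mean and standard deviation).
import numpy as np

def color_moments(image):
    """image: H x W x 3 array. Returns a 6-dimensional feature vector."""
    pixels = image.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def distance(feat_a, feat_b):
    return float(np.linalg.norm(feat_a - feat_b))

img_query = np.random.randint(0, 256, (64, 64, 3))     # stand-in images
img_db = np.random.randint(0, 256, (64, 64, 3))
print(distance(color_moments(img_query), color_moments(img_db)))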
IEEE INTERNATIONAL
SYMPOSIUM ON APPLIED MACHINE INTELLIGENCE AND INFORMATICS • JANUARY, 2011
Abstract— The content based image retrieval (CBIR) is one of the most
popular, rising research areas of the digital image processing. Most of the
available image search tools, such as Google Images and Yahoo! Image search,
are based on textual annotation of images. In these tools, images are manually
annotated with keywords and then retrieved using text-based search methods. The
performances of these systems are not satisfactory. The goal of CBIR is to
extract visual content of an image automatically, like color, texture, or
shape. This paper aims to introduce the problems and challenges concerned with
the design and the creation of CBIR systems, which is based on a free hand
sketch (sketch-based image retrieval, SBIR). With the help of existing
methods, we describe a possible solution for designing and implementing a
task-specific descriptor that can bridge the information gap between a sketch and
a color image, thereby enabling efficient search. The descriptor is constructed
after a special sequence of preprocessing steps so that the transformed
full-color image and the sketch can be compared. We
have studied EHD, HOG and SIFT. Experimental results on two sample databases
showed good results. Overall, the results show that the sketch based system
allows users an intuitive access to search-tools. The SBIR technology can be
used in several applications such as digital libraries, crime prevention, and
photo sharing sites. Such a system has great value in apprehending suspects
and identifying victims in forensics and law enforcement. A possible
application is matching a forensic sketch to a gallery of mug shot images.
Research on retrieving images based on the visual content of the query picture
has intensified recently, demanding a wide spectrum of image processing
methodology.
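One possible way to compare a query sketch with a preprocessed (e.g., edge-extracted) color image along the lines of the HOG descriptor mentioned above is sketched below; the paper's preprocessing pipeline is not reproduced, and the cell sizes are assumptions.

# HOG-based similarity between a query sketch and an image edge map.
import numpy as np
from skimage.feature import hog

def hog_descriptor(gray_image):
    return hog(gray_image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sketch = np.random.rand(128, 128)      # stand-ins for a query sketch and an
edge_map = np.random.rand(128, 128)    # edge map extracted from a color image
print(cosine_similarity(hog_descriptor(sketch), hog_descriptor(edge_map)))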
IEEE
TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, MARCH-APRIL 2011
Abstract— Anonymizing networks such as Tor allow
users to access Internet services privately by using a series of routers to
hide the client’s IP address from the server. The success of such networks,
however, has been limited by users employing this anonymity for abusive
purposes such as defacing popular Web sites. Web site administrators routinely
rely on IP-address blocking for disabling access to misbehaving users, but
blocking IP addresses is not practical if the abuser routes through an
anonymizing network. As a result, administrators block all known exit nodes of
anonymizing networks, denying anonymous access to misbehaving and
behaving users alike. To address this problem, we present Nymble, a system
in which servers can “blacklist” misbehaving users, thereby blocking users
without compromising their anonymity. Our system is thus agnostic to different
servers’ definitions of misbehavior—servers can blacklist users for whatever
reason, and the privacy of blacklisted users is maintained.
IEEE
2011: SAT: A Security Architecture Achieving Anonymity and Traceability in
Wireless Mesh Networks
IEEE
TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, MARCH-APRIL 2011
Abstract— Anonymity has received increasing
attention in the literature due to the users’ awareness of their privacy
nowadays. Anonymity provides protection for users to enjoy network services
without being traced. While anonymity-related issues have been extensively
studied in payment-based systems such as e-cash and peer-to-peer (P2P) systems,
little effort has been devoted to wireless mesh networks (WMNs). On the other
hand, the network authority requires conditional anonymity such that
misbehaving entities in the network remain traceable. In this paper, we propose
a security architecture to ensure unconditional anonymity for honest users and
traceability of misbehaving users for network authorities in WMNs. The proposed
architecture strives to resolve the conflicts between the anonymity and
traceability objectives, in addition to guaranteeing fundamental security
requirements including authentication, confidentiality, data integrity, and
non-repudiation. A thorough analysis of security and efficiency is incorporated,
demonstrating the feasibility and effectiveness of the proposed architecture.
IEEE TRANSACTIONS ON
DEPENDABLE AND SECURE COMPUTING , MAY-JUNE 2011
Abstract— Active worms pose major security threats to the Internet.
This is due to the ability of active worms to propagate in an automated fashion
as they continuously compromise computers on the Internet. Active worms evolve
during their propagation and thus pose great challenges to defend against them.
In this paper, we investigate a new class of active worms, referred to as
Camouflaging Worm (C-Worm in short). The C-Worm is different from traditional
worms because of its ability to intelligently manipulate its scan traffic
volume over time. Thereby, the C-Worm camouflages its propagation from existing
worm detection systems based on analyzing the propagation traffic generated by
worms. We analyze characteristics of the C-Worm and conduct a comprehensive
comparison between its traffic and non-worm traffic (background traffic). We
observe that these two types of traffic are barely distinguishable in the time
domain. However, their distinction is clear in the frequency domain, due to the
recurring manipulative nature of the C-Worm. Motivated by our observations, we
design a novel spectrum-based scheme to detect the C-Worm. Our scheme uses the
Power Spectral Density (PSD) distribution of the scan traffic volume and its
corresponding Spectral Flatness Measure (SFM) to distinguish the C-Worm traffic
from background traffic. Using a comprehensive set of detection metrics and
real-world traces as background traffic, we conduct extensive performance
evaluations on our proposed spectrum-based detection scheme. The performance
data clearly demonstrates that our scheme can effectively detect the C-Worm
propagation. Furthermore, we show the generality of our spectrum-based scheme
in effectively detecting not only the C-Worm, but traditional worms as well.
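The frequency-domain distinction can be illustrated with a small sketch (the sampling rate, window length, and synthetic traffic below are assumptions): compute the PSD of the scan-traffic time series and its spectral flatness measure; a low SFM indicates the strong recurring component typical of C-Worm-style manipulation, while noise-like background traffic stays closer to 1.

# PSD / Spectral Flatness Measure sketch for recurring scan-traffic patterns.
import numpy as np
from scipy.signal import welch

def spectral_flatness(traffic_volume, fs=1.0):
    _, psd = welch(traffic_volume, fs=fs, nperseg=min(256, len(traffic_volume)))
    psd = psd[psd > 0]
    geometric_mean = np.exp(np.mean(np.log(psd)))
    return geometric_mean / np.mean(psd)          # SFM in (0, 1]

t = np.arange(2048)
background = np.abs(np.random.randn(2048))                 # noise-like traffic
cworm = background + 5 * (1 + np.sin(2 * np.pi * t / 64))  # recurring manipulation

print("background SFM:", round(spectral_flatness(background), 3))   # closer to 1
print("C-Worm-like SFM:", round(spectral_flatness(cworm), 3))       # noticeably smaller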
IEEE Conference on
Information Theory; May 2011
Abstract— Many wireless communication systems such as IS- 54, enhanced data
rates for the GSM evolution (EDGE), worldwide interoperability for microwave
access (WiMAX), and long term evolution (LTE) have adopted low-density
parity-check (LDPC), tail-biting convolutional, and turbo codes as the forward
error correction (FEC) scheme for data and overhead channels. Therefore,
many efficient algorithms have been proposed for decoding these codes. However,
the different decoding approaches for these two families of codes usually lead
to different hardware architectures. Since these codes work side by side in
these new wireless systems, it is a good idea to introduce a universal decoder
to handle these families of codes. The present work exploits the
parity-check matrix (H) representation of tail-biting convolutional and turbo codes,
thus enabling decoding via a unified belief propagation (BP) algorithm. Indeed,
the BP algorithm provides a highly effective general methodology for devising
low-complexity iterative decoding algorithms for all convolutional code classes
as well as turbo codes. While a small performance loss is observed when
decoding turbo codes with BP instead of MAP, this is offset by the lower
complexity of the BP algorithm and the inherent advantage of a unified decoding
Architecture
IEEE PROCEEDINGS OF
ICETECT 2011
Abstract— Intrusion detection plays an important role in the area of
security in WSN. Detection of any type of intruder is essential in case of WSN.
WSN consumes a lot of energy to detect an intruder. Therefore we derive an
algorithm for energy efficient external and internal intrusion detection. We
also analyse the probability of detecting the intruder for heterogeneous WSN.
This paper considers single sensing and multi sensing intruder detection
models. It is found that our experimental results validate the theoretical
results.
IEEE 2011: Network Coding
Based Privacy Preservation against Traffic Analysis in Multi-Hop Wireless
Networks
IEEE TRANSACTIONS ON
WIRELESS COMMUNICATIONS, MARCH 2011
Abstract— Privacy threat is one of the critical issues in multihop
wireless networks, where attacks such as traffic analysis and flow tracing can
be easily launched by a malicious adversary due to the open wireless medium.
Network coding has the potential to thwart these attacks since the
coding/mixing operation is encouraged at intermediate nodes. However, the
simple deployment of network coding cannot achieve the goal once enough packets
are collected by the adversaries. On the other hand, the coding/mixing nature precludes
the feasibility of employing the existing privacy-preserving techniques, such
as Onion Routing. In this paper, we propose a novel network coding based
privacy-preserving scheme against traffic analysis in multihop wireless
networks. With homomorphic encryption of Global Encoding Vectors (GEVs), the
proposed scheme offers two significant privacy-preserving features, packet flow
untraceability and message content confidentiality, for efficiently thwarting
the traffic analysis attacks. Moreover, the proposed scheme keeps the random
coding feature, and each sink can recover the source packets by inverting the
GEVs with a very high probability. Theoretical analysis and simulation-based
evaluation demonstrate the validity and efficiency of the proposed scheme.
IEEE 2011: Scalable and
Cost-Effective Interconnection of Data-Center Servers Using Dual Server Ports
IEEE/ACM TRANSACTIONS
ON NETWORKING
Abstract— The goal of data-center networking is to interconnect a large
number of server machines with low equipment cost while providing high network
capacity and high bisection width. It is well understood that the current
practice where servers are connected by a tree hierarchy of network switches
cannot meet these requirements. In this paper, we explore a new
server-interconnection structure. We observe that the commodity server machines
used in today’s data centers usually come with two built-in Ethernet ports, one
for network connection and the other left for backup purposes. We believe that
if both ports are actively used in network connections, we can build a
scalable, cost-effective interconnection structure without either the expensive
higher-level large switches or any additional hardware on servers. We design
such a networking structure called FiConn. Although the server node degree is
only 2 in this structure, we have proven that FiConn is highly scalable to
encompass hundreds of thousands of servers
with low diameter and high bisection width. We have developed a
low-overhead traffic-aware routing mechanism to improve effective link
utilization based on dynamic traffic state. We have also proposed how to
incrementally deploy FiConn.
IEEE TRANSACTIONS ON WIRELESS
COMMUNICATIONS, JANUARY 2011
Abstract— In this paper, the performance of the ALOHA and CSMA MAC
protocols is analyzed in spatially distributed wireless networks. The main
system objective is correct reception of packets, and thus the analysis is
performed in terms of outage probability. In our network model, packets
belonging to specific transmitters arrive randomly in space and time according
to a 3-D Poisson point process, and are then transmitted to their intended
destinations using a fully-distributed MAC protocol. A packet transmission is
considered successful if the received SINR is above a predefined threshold for
the duration of the packet. Accurate bounds on the outage probabilities are
derived as a function of the transmitter density, the number of backoffs and
retransmissions, and, in the case of CSMA, the sensing threshold. The
analytical expressions are validated with simulation results. For
continuous-time transmissions, CSMA with receiver sensing (which involves
adding a feedback channel to the conventional CSMA protocol) is shown to yield
the best performance. Moreover, the sensing threshold of CSMA is optimized. It
is shown that introducing sensing for lower densities (i.e., in sparse
networks) is not beneficial, while for higher densities (i.e., in dense
networks), using an optimized sensing threshold provides significant gain.
IEEE TRANSACTIONS ON
MOBILE COMPUTING, Jan 2011
Abstract— Monitoring personal locations with a potentially untrusted
server poses privacy threats to the monitored individuals. To this end, we
propose a privacy-preserving location monitoring system for wireless sensor
networks. In our system, we design two in-network location anonymization
algorithms, namely, resource-aware and quality-aware algorithms,
that aim to enable the system to provide high quality location monitoring
services for system users, while preserving personal location privacy. Both
algorithms rely on the well-established k-anonymity privacy concept, that is, a
person is indistinguishable among k persons, to enable trusted sensor nodes to
provide the aggregate location information of monitored persons for our system.
Each aggregate location is in the form of a monitored area A along with the
number of monitored persons residing in A, where A contains at least k persons.
The resource-aware algorithm aims to minimize communication and computational
cost, while the quality-aware algorithm aims to maximize the accuracy of the
aggregate locations by minimizing their monitored areas. To utilize the
aggregate location information to provide location monitoring services, we use
a spatial histogram approach that estimates the distribution of the monitored
persons based on the gathered aggregate location information. Then the
estimated distribution is used to provide location monitoring services through
answering range queries. We evaluate our system through simulated experiments.
The results show that our system provides high-quality location monitoring
services for system users and guarantees the location privacy of the monitored
persons.
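A toy version of the aggregation step follows (the linear merge order and the data are simplifications of the paper's in-network, resource-aware and quality-aware algorithms): per-sensor person counts are merged into areas until each reported area covers at least k persons.

# Toy k-anonymous aggregation: merge per-sensor counts along a line of sensors
# until every reported area contains at least k persons.
def aggregate_locations(sensor_counts, k):
    """sensor_counts: list of (sensor_id, count). Returns [(area, count), ...]."""
    areas, current_ids, current_count = [], [], 0
    for sensor_id, count in sensor_counts:
        current_ids.append(sensor_id)
        current_count += count
        if current_count >= k:
            areas.append((tuple(current_ids), current_count))
            current_ids, current_count = [], 0
    if current_ids and areas:          # fold any leftover sensors into the last area
        last_ids, last_count = areas[-1]
        areas[-1] = (last_ids + tuple(current_ids), last_count + current_count)
    return areas

counts = [("s1", 2), ("s2", 1), ("s3", 4), ("s4", 0), ("s5", 3)]
print(aggregate_locations(counts, k=3))   # each reported area covers >= 3 persons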
IEEE 2011: ROC: Resilient
Online Coverage for Surveillance Applications
IEEE/ACM TRANSACTIONS
ON NETWORKING, FEBRUARY 2011
Abstract— We consider surveillance applications in which sensors are
deployed in large numbers to improve coverage fidelity. Previous research has
studied how to select active sensor covers (subsets of nodes that cover the
field) to efficiently exploit redundant node deployment and tolerate unexpected
node failures. Little attention was given to studying the tradeoff between
fault tolerance and energy efficiency in sensor coverage. In this work, our
objectives are twofold. First, we aim at rapidly restoring field coverage under
unexpected sensor failures in an energy-efficient manner. Second, we want to
flexibly support different degrees of redundancy in the field without needing
centralized control. To meet these objectives, we propose design guidelines for
applications that employ distributed cover-selection algorithms to control the
degree of redundancy at local regions in the field. In addition, we develop a
new distributed technique to facilitate switching between active covers without
the need for node synchronization. Distributed cover-selection protocols can be
integrated into our framework, referred to as “resilient online coverage” (ROC).
A key novelty in ROC is that it allows every sensor to control the degree of
redundancy and surveillance in its region according to current network
conditions. We analyze the benefits of ROC in terms of energy efficiency and
fault tolerance. Through extensive simulations, we demonstrate the
effectiveness of ROC in operational scenarios and compare its performance with
previous surveillance techniques.
IEEE
TRANSACTIONS ON VEHICULAR TECHNOLOGY, JUNE 2011
Abstract— We address cooperative caching in wireless
networks, where the nodes may be mobile and exchange information in a
peer-to-peer fashion. We consider both cases of nodes with large and
small-sized caches. For large-sized caches, we devise a strategy where nodes,
independent of each other, decide whether to cache some content and for how
long. In the case of small-sized caches, we aim to design a content replacement
strategy that allows nodes to successfully store newly received information
while maintaining the good performance of the content distribution system.
Under both conditions, each node takes decisions according to its perception of
what nearby users may store in their caches and with the aim of differentiating
its own cache content from that of the other nodes. The result is the creation of
content diversity within a node’s neighborhood so that a requesting user
likely finds the desired information nearby. We simulate our caching algorithms
in different ad hoc network scenarios and compare them with other caching
schemes, showing that our solution succeeds in creating the desired content
diversity, thus leading to resource-efficient information access.
IEEE
TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, MARCH 2010
Abstract— Efficiency and privacy are two
fundamental issues in moving object monitoring. This paper proposes a
privacy-aware monitoring (PAM) framework that addresses both issues. The
framework distinguishes itself from the existing work by being the first to
holistically address the issues of location updating in terms of monitoring
accuracy, efficiency, and privacy, particularly, when and how mobile clients
should send location updates to the server. Based on the notions of safe region
and most probable result, PAM performs location updates only when they would
likely alter the query results. Furthermore, by designing various client update
strategies, the framework is flexible and able to optimize accuracy, privacy,
or efficiency. We develop efficient query evaluation/reevaluation and safe
region computation algorithms in the framework. The experimental results show
that PAM substantially outperforms traditional schemes in terms of monitoring
accuracy, CPU cost, and scalability while achieving close-to-optimal communication
cost.
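The client-side update rule can be sketched as follows (the safe-region computation itself, which depends on the registered queries, is assumed to be given and is stubbed here): a client reports its position only when it leaves the safe region assigned by the server, since only then can the query results change.

# Sketch of the client-side update rule in a safe-region scheme.
from math import dist

class Client:
    def __init__(self, center, radius):
        self.center = center      # (circular) safe region assigned by the server
        self.radius = radius

    def move(self, new_position):
        if dist(new_position, self.center) <= self.radius:
            return None                              # inside safe region: stay silent
        # Outside: report, and receive a new safe region from the server (stubbed).
        self.center = new_position
        return ("location_update", new_position)

c = Client(center=(0.0, 0.0), radius=5.0)
print(c.move((1.0, 2.0)))   # None -> no communication needed
print(c.move((7.0, 1.0)))   # ('location_update', (7.0, 1.0))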
IEEE TRANSACTIONS ON
DEPENDABLE AND SECURE COMPUTING, MARCH 2010
Abstract— Intrusion detection faces a number of challenges; an
intrusion detection system must reliably detect malicious activities in a
network and must perform efficiently to cope with the large amount of network
traffic. In this paper, we address these two issues of Accuracy and Efficiency
using Conditional Random Fields and Layered Approach. We demonstrate that high
attack detection accuracy can be achieved by using Conditional Random Fields
and high efficiency by implementing the Layered Approach. Experimental results
on the benchmark KDD ’99 intrusion data set show that our proposed system based
on Layered Conditional Random Fields outperforms other well-known methods such
as the decision trees and the naïve Bayes. The improvement in attack detection
accuracy is very high, particularly, for the U2R attacks (34.8 percent
improvement) and the R2L attacks (34.5 percent improvement). Statistical Tests
also demonstrate higher confidence in detection accuracy for our method.
Finally, we show that our system is robust and is able to handle noisy data
without compromising performance.
IEEE 2010: A Scheduling
and Call Admission Control Algorithm for WiMax Mesh Network with Strict QoS Guarantee
IEEE COMMUNICATION
SYSTEMS AND NETWORKS (COMSNETS), 2010
Abstract— The IEEE 802.16 standard (commonly known as WiMax) has emerged
as a broadband wireless technology covering large geographical area while
providing high speed data rates with native Quality of Service (QoS) support.
In this paper, we study mesh mode of operation of WiMax with centralized
scheduling for UGS and RTPS service classes. We briefly discuss two known
routing algorithms (to find the path of a request) and propose two new routing
algorithms. We present a novel scheduling and Call Admission Control (CAC)
algorithm for the UGS and RTPS service classes. The scheduling and CAC algorithm
makes sure that every packet of an admitted request strictly meets its delay
and jitter constraints. Since an RTPS request can change its data rate
requirement, we propose an efficient algorithm for computing the extra bandwidth
request for the RTPS service class, which performs much better in terms of average
packet delay and packet drop percentage compared to some simple algorithms. We
present simulation results comparing our scheduling algorithm with two other
algorithms proposed in the literature. We also present results which show that
our scheduling does provide strict QoS guarantee for every packet.
IEEE SICE ANNUAL CONFERENCE, 2008
Abstract— Computer vision and recognition play an increasingly important role
in intelligent control. In this paper, image segmentation is performed using
resonance theory. The resonance algorithm is an unsupervised method for
generating regions (in feature space) from similar pixels (feature vectors) in
an image. It tolerates gradual changes of texture to some
extent for image segmentation. The purpose of the paper is to propose a
practical method for image segmentation, which is always the first step to
control a real intelligent control system.