Wednesday, 17 July 2013

IEEE 2013: DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency in Wireless Mobile Networks


IEEE 2013 Transactions on Mobile Computing

Technology - Available in Java and Dot Net

Abstract—This paper proposes the distributed cache invalidation mechanism (DCIM), a client-based cache consistency scheme implemented on top of a previously proposed architecture for caching data items in mobile ad hoc networks (MANETs), namely COACS, where special nodes cache the queries and the addresses of the nodes that store the responses to these queries. We have also previously proposed a server-based consistency scheme, named SSUM, whereas in this paper we introduce DCIM, which is totally client-based. DCIM is a pull-based algorithm that implements adaptive time to live (TTL), piggybacking, and prefetching, and provides near strong consistency capabilities. Cached data items are assigned adaptive TTL values that correspond to their update rates at the data source: items with expired TTL values are grouped into validation requests to the data source to refresh them, whereas unexpired items with high request rates are prefetched from the server. In this paper, DCIM is analyzed to assess the delay and bandwidth gains (or costs) when compared to polling-every-time and push-based schemes. DCIM was also implemented using ns2 and compared against client-based and server-based schemes to assess its performance experimentally. The consistency ratio, delay, and overhead traffic are reported against several variables, where DCIM proved superior to the other systems.


Index Terms—Cache consistency, data caching, client-based, invalidation, MANET, TTL
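The abstract's core idea can be sketched in a few lines of Java. This is an illustrative sketch only: the class and field names below are our own, not from the paper. It shows the three mechanisms DCIM describes: a TTL scaled to the item's observed update interval at the source, expired items batched into one validation request, and unexpired but frequently requested items marked for prefetching.

```java
import java.util.*;

public class DcimSketch {
    static class DcimEntry {
        long updateIntervalMs; // observed time between updates at the source
        long expiresAtMs;      // when the cached copy's TTL runs out
        int requestsInWindow;  // client demand seen in the current window
    }

    // Adaptive TTL: a fraction of the observed update interval, so items
    // that change rarely keep long TTLs and are rarely refetched.
    static long adaptiveTtlMs(long updateIntervalMs, double factor) {
        return (long) (updateIntervalMs * factor);
    }

    // Expired ids are grouped into one batched validation request.
    static List<String> toValidate(Map<String, DcimEntry> cache, long nowMs) {
        List<String> ids = new ArrayList<>();
        for (Map.Entry<String, DcimEntry> e : cache.entrySet())
            if (e.getValue().expiresAtMs <= nowMs) ids.add(e.getKey());
        return ids;
    }

    // Unexpired but hot ids are prefetched from the server.
    static List<String> toPrefetch(Map<String, DcimEntry> cache, long nowMs, int hotThreshold) {
        List<String> ids = new ArrayList<>();
        for (Map.Entry<String, DcimEntry> e : cache.entrySet())
            if (e.getValue().expiresAtMs > nowMs
                    && e.getValue().requestsInWindow >= hotThreshold) ids.add(e.getKey());
        return ids;
    }
}
```

The `factor` parameter stands in for whatever adaptation rule the paper's analysis derives; the point is only that TTL tracks the source's update rate.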

IEEE 2013: Generation of Personalized Ontology Based on Consumer Emotion and Behavior Analysis


IEEE 2013 Transactions on Affective Computing

Technology - Available in Java and Dot Net

Abstract—The relationships between consumer emotions and their buying behaviors have been well documented. Technology-savvy consumers often use the web to find information on products and services before they commit to buying. We propose a semantic web usage mining approach for discovering periodic web access patterns from annotated web usage logs, which incorporates information on consumer emotions and behaviors through self-reporting and behavioral tracking. We use fuzzy logic to represent real-life temporal concepts (e.g., morning) and requested resource attributes (ontological domain concepts for the requested URLs) of periodic-pattern-based web access activities. These fuzzy temporal and resource representations, which contain both behavioral and emotional cues, are incorporated into a Personal Web Usage Lattice that models the user’s web access activities. From this, we generate a Personal Web Usage Ontology written in OWL, which enables semantic web applications such as personalized web resources recommendation. Finally, we demonstrate the effectiveness of our approach by presenting experimental results in the context of personalized web resources recommendation with varying degrees of emotional influence. Emotional influence has been found to contribute positively to adaptation in personalized recommendation.
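The fuzzy temporal representation the abstract mentions can be sketched as a trapezoidal membership function for "morning". The breakpoints here (rising from 5:00, full membership 7:00 to 10:00, gone by 12:00) are illustrative assumptions, not values from the paper.

```java
public class FuzzyMorning {
    // Degree (0.0 to 1.0) to which a clock hour counts as "morning".
    static double morning(double hour) {
        if (hour <= 5.0 || hour >= 12.0) return 0.0;
        if (hour < 7.0) return (hour - 5.0) / 2.0;  // rising edge
        if (hour <= 10.0) return 1.0;               // plateau
        return (12.0 - hour) / 2.0;                 // falling edge
    }
}
```

An access at 8:00 is fully "morning" (membership 1.0), while one at 11:00 is only partially so (0.5), which is exactly the kind of graded temporal label the lattice consumes.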

IEEE 2013: Winds of Change: From Vendor Lock-In to the Meta Cloud


IEEE 2013 Internet Computing

Technology - Available in Java and Dot Net

The cloud computing paradigm has achieved widespread adoption in recent years. Its success is due largely to customers’ ability to use services on demand with a pay-as-you-go pricing model, which has proved convenient in many respects. Low costs and high flexibility make migrating to the cloud compelling. Despite its obvious advantages, however, many companies hesitate to “move to the cloud,” mainly because of concerns related to service availability, data lock-in, and legal uncertainties [1]. Lock-in is particularly problematic. For one thing, even though public cloud availability is generally high, outages still occur [2]. Businesses locked into such a cloud are essentially at a standstill until the cloud is back online. Moreover, public cloud providers generally don’t guarantee particular service-level agreements (SLAs) [3]; that is, businesses locked into a cloud have no guarantees that it will continue to provide the required quality of service (QoS). Finally, most public cloud providers’ terms of service let the provider unilaterally change pricing at any time. Hence, a business locked into a cloud has no mid- or long-term control over its own IT costs.

IEEE 2013: Toward a Reliable, Secure and Fault-Tolerant Smart Grid State Estimation in the Cloud


IEEE 2013 Innovative Smart Grid Technologies

Technology - Available in Java and Dot Net


Abstract—The collection and prompt analysis of synchrophasor measurements is a key step towards enabling the future smart power grid, in which grid management applications would be deployed to monitor and react intelligently to changing conditions. The potential exists to slash inefficiencies and to adaptively reconfigure the grid to take better advantage of renewables, to coordinate and share reactive power, and to reduce the risk of catastrophic large-scale outages. However, to realize this potential, a number of technical challenges must be overcome. We describe a continuously active, timely monitoring framework that we have created, architected to support a wide range of grid-control applications in a standard manner designed to leverage cloud computing. Cloud computing systems bring significant advantages, including an elastic, highly available, and cost-effective compute infrastructure well suited to this application. We believe that by showing how challenges of reliability, timeliness, and security can be addressed while leveraging cloud standards, our work opens the door for wider exploitation of the cloud by the smart grid community. This paper characterizes a PMU-based state-estimation application, explains how the desired system maps to a cloud architecture, identifies limitations in the standard cloud infrastructure relative to the needs of this use case, and then shows how we adapt the basic cloud platform options with sophisticated technologies of our own to achieve the required levels of usability, fault tolerance, and parallelism.

IEEE 2013: Security and Privacy Enhancing Multi-Cloud Architectures



IEEE 2013 Transactions on Dependable and Secure Computing

Technology - Available in Java and Dot Net

Abstract—Security challenges are still among the biggest obstacles when considering the adoption of cloud services. This has triggered a great deal of research activity, resulting in a quantity of proposals targeting the various cloud security threats. Alongside these security issues, the cloud paradigm comes with a new set of unique features that open the path towards novel security approaches, techniques, and architectures. This paper provides a survey of the achievable security merits of using multiple distinct clouds simultaneously. Various distinct architectures are introduced and discussed according to their security and privacy capabilities and prospects.


Index Terms—Cloud; Security; Privacy; Multi-Cloud; Application Partitioning; Tier Partitioning; Data Partitioning; Multi-party Computation

IEEE 2013: Scalable and Secure Sharing of Personal Health Records in Cloud Computing using Attribute-based Encryption

IEEE 2013 Transactions on Parallel & Distributed Systems


Technology - Available in Java and Dot Net

Abstract—The personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns, as personal health information could be exposed to those third-party servers and to unauthorized parties. To assure patients’ control over access to their own PHRs, encrypting the PHRs before outsourcing is a promising method. Yet issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semi-trusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient’s PHR file. Differing from previous works in secure data outsourcing, we focus on the multiple-data-owner scenario and divide the users in the PHR system into multiple security domains, which greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multi-authority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.
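To make the fine-grained access control idea concrete, here is a minimal sketch of only the access-structure side of ABE: deciding whether a user's attribute set satisfies a k-of-n threshold policy over attributes. The cryptographic enforcement the paper relies on (pairing-based encryption, keys issued by multiple authorities) is not shown, and all names here are illustrative.

```java
import java.util.*;

public class AbePolicySketch {
    // Threshold gate: access is granted when the user holds at least
    // `threshold` of the attributes listed in the policy.
    static boolean satisfies(Set<String> userAttrs, List<String> policyAttrs, int threshold) {
        int matched = 0;
        for (String attr : policyAttrs)
            if (userAttrs.contains(attr)) matched++;
        return matched >= threshold;
    }
}
```

In a real ABE scheme this check never runs as plain code: decryption simply fails unless the key's attributes satisfy the ciphertext's policy, which is what makes the control cryptographically enforced rather than server-enforced.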



IEEE 2013: GovCloud: Using Cloud Computing in Public Organizations



IEEE 2013 Technology and Society

Technology - Available in Java and Dot Net

Governments are facing reductions in ICT budgets just as users are increasing demands for electronic services. One solution promoted aggressively by vendors is cloud computing. Cloud computing is not a new technology, but as described by Jackson [1] it is a new way of offering services, taking into consideration business and economic models for providing and consuming ICT services. Here we explain the impact and benefits of cloud services for public organizations and explore why governments are slow to adopt them; the literature does not cover this subject in detail, especially for European organizations.

IEEE 2013: Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment

IEEE 2013 Transactions on Parallel & Distributed Systems

Technology - Available in Java and Dot Net

Abstract—Cloud computing allows business customers to scale their resource usage up and down based on need. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. In this paper, we present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and to support green computing by optimizing the number of servers in use. We introduce the concept of “skewness” to measure the unevenness in the multi-dimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. We develop a set of heuristics that effectively prevent overload in the system while saving energy. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
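The skewness metric the abstract introduces can be sketched directly. The formula below, sqrt of the sum over resources of (r_i / mean - 1)^2, is our reading of the paper's definition; a server using all resources evenly scores zero, and the more lopsided the utilization vector, the higher the score.

```java
public class SkewnessMetric {
    // util holds one utilization value per resource dimension
    // (CPU, memory, network, ...), each typically in [0, 1].
    static double skewness(double[] util) {
        double mean = 0.0;
        for (double r : util) mean += r;
        mean /= util.length;
        double sum = 0.0;
        for (double r : util) {
            double d = r / mean - 1.0;  // relative deviation from the mean
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```

A placement heuristic can then prefer the candidate server whose skewness after accepting a VM is lowest, which is how minimizing skewness mixes complementary workloads onto the same machine.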