
Archive 2011/2012


    Seminars

    • Deduplication: Security issues and some solutions - 2012-09-06 - Roberto Di Pietro (University of Roma Tre)

      Abstract: Deduplication is a technique used to reduce the amount of storage needed by service providers. It is based on the intuition that several users may want (for different reasons) to store the same content; hence, storing a single copy of these files is sufficient. Albeit simple in theory, the implementation of this concept introduces many security risks. In this talk we address the most severe one: an adversary who possesses only a fraction of the original file, or who partially colludes with a rightful owner, claiming to possess such a file. Our contributions are manifold: first, we introduce a novel Proof of Ownership (POW) scheme that has all the features of the state-of-the-art solution while incurring only a fraction of its overhead; we also propose viable optimization techniques that further improve the scheme's performance. Finally, the quality of our proposal is supported by extensive benchmarking. Further research directions will also be discussed. (Joint work with Alessandro Sorniotti, IBM Research, Zurich.) Length: 45' + Q&A
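The challenge-response idea underlying such Proof of Ownership schemes can be sketched in a few lines. This is a generic illustration (block hashing with random spot checks), not the scheme presented in the talk, and all names are hypothetical:

```python
import hashlib
import secrets

BLOCK = 1024  # 1 KiB blocks

def block_hashes(data: bytes) -> list:
    """Split a file into fixed-size blocks and hash each block."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

class Server:
    """Stores block hashes of the file; challenges clients claiming to own it."""
    def __init__(self, data: bytes):
        self.hashes = block_hashes(data)

    def challenge(self, k: int = 5) -> list:
        # Ask the client to prove knowledge of k randomly chosen blocks.
        return [secrets.randbelow(len(self.hashes)) for _ in range(k)]

    def verify(self, indices, responses) -> bool:
        return all(self.hashes[i] == r for i, r in zip(indices, responses))

def client_respond(data: bytes, indices):
    hs = block_hashes(data)
    return [hs[i] for i in indices]

file = secrets.token_bytes(16 * 1024)  # 16 blocks
server = Server(file)

idx = server.challenge()
print(server.verify(idx, client_respond(file, idx)))  # True: rightful owner

# An adversary holding only the first 4 KiB (rest zero-padded) fails
# as soon as the challenge hits a block it does not possess:
partial = file[:4 * 1024] + bytes(12 * 1024)
print(server.verify(list(range(16)), client_respond(partial, list(range(16)))))  # False
```

A real scheme must also bound the server's per-challenge computation and storage; reducing exactly that overhead is the contribution described in the abstract.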

    • Privacy in Social Networks - 2012-09-14 - Roland Yap (National University of Singapore)

      Abstract: The popularity and size of online social networks mean that the social graph contains valuable data about relationships. Such graph data may be sensitive, so there is a need to protect it from privacy leaks. Some social networks are essentially public, e.g. Facebook, where much of the information about users is available in public profiles, except for a certain number of private profiles which are closed. From the perspective of the operator of the social network, public information and crawlability are needed to support the basic utility and services built on top of the network. From the users' perspective, however, they may want to maintain privacy. We propose policies with which the owner of the social network can trade off between these two conflicting goals. We also show that much of the link information of private users can be leaked under certain policies. We experiment with real-world social network graphs and show that the owner of the graph can employ policies which meet particular tradeoffs under different crawlers. Another class of social networks is mostly private, e.g. LinkedIn, where information about the links of a node is only available to friends. In the link privacy attack, it turns out that by bribing or compromising a small number of nodes (users) in the social network graph, it is possible to obtain complete link information for a much larger fraction of the other, non-bribed nodes. This can constitute a significant privacy breach. We explain why the link privacy attack is effective, and we present several strategies, including a privacy control, by which users can reduce the effect of the attack.
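The mechanics of the link privacy attack on friends-only networks can be illustrated with a small graph experiment. This is a toy model, under the assumption that a node's friend list is visible to its friends (so compromising v exposes the full adjacency of v and of all of v's neighbours); it is not the talk's actual methodology or dataset:

```python
import random
from collections import defaultdict

def exposed_nodes(adj, bribed):
    """Nodes whose complete friend list leaks when `bribed` nodes are
    compromised: the bribed nodes themselves plus all their neighbours."""
    exposed = set(bribed)
    for v in bribed:
        exposed |= adj[v]
    return exposed

# Toy random graph: 200 nodes, ~600 edges.
rng = random.Random(42)
n = 200
edges = {tuple(sorted(rng.sample(range(n), 2))) for _ in range(600)}
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Bribing only the 5 highest-degree nodes already exposes many others.
bribed = sorted(range(n), key=lambda x: len(adj[x]), reverse=True)[:5]
exposed = exposed_nodes(adj, bribed)
print(f"bribed {len(bribed)}, exposed {len(exposed)} of {n}")
```

On real social graphs the effect is amplified by the heavy-tailed degree distribution: a handful of hubs is adjacent to a large fraction of the network.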

    • The Cyber-Physical Attacker - 2012-05-17 - Roberto Vigo (Danmarks Tekniske Universitet)

      Abstract: Cyber-Physical Systems (CPSs) are increasingly exploited in the realization of critical infrastructures as well as general-purpose applications. The proliferation of such systems demands that their nature and properties be thoroughly investigated, but an effort is still needed to tackle the diverse issues addressed in the literature. Among the many questions that remain unanswered about CPSs, a crucial position is held by security-related matters, since compromising such a system could create economic, social, and political damage altogether. Since a system cannot be secured without precisely defining the threats it faces, we address the definition of an attacker scenario for CPSs. In particular, we describe the parameters that define such an attacker, and we compare the expressiveness of the resulting picture to the well-known Dolev-Yao model and to other, more recent frameworks. Finally, we take a protocol perspective on the devised scenario, in order to study to what extent existing analysis techniques can be applied to the verification of security properties of CPSs.

    • Representing Style: the contribution of Markov Constraints - 2012-04-27 - Francois Pachet (SONY Computer Science Laboratory, Paris)

      Abstract: The motivation of this work is to generate sequences (typically music or text) that imitate a given "style". A traditional approach to representing style is to use statistical methods such as Markov processes. Markov processes adequately capture short-term dependencies in temporal sequences, but are very difficult to control. Imposing arbitrary properties on Markov chains is indeed paradoxical, as it often goes against the basic Markov hypothesis of limited temporal dependency. This talk introduces our work on Markov constraints: a new class of global constraints whose goal is to reformulate Markov processes in the framework of CSP, to enable users to generate sequences "in the style of" a given corpus while satisfying arbitrary control criteria. We will give examples in music and text generation, and relate this work to the ERC-funded Flow Machines project. References: http://www.csl.sony.fr/~pachet/flow_machines.html - Pachet, F. and Roy, P. Markov constraints: steerable generation of Markov sequences. Constraints, 16(2):148-172, March 2011. - Pachet, F., Roy, P. and Barbieri, G. Finite-Length Markov Processes with Constraints. Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), pages 635-642, Barcelona, Spain, July 2011.
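The core difficulty, imposing a hard constraint on a Markov model (here: the sequence must end on a given symbol) without ever walking into a dead end, can be sketched with backward filtering over an order-1 transition table. This is a simplified illustration of the finite-length constrained-generation problem, not the Markov-constraints framework itself:

```python
import random

def train(corpus):
    """Order-1 Markov transition table: symbol -> set of observed successors."""
    trans = {}
    for a, b in zip(corpus, corpus[1:]):
        trans.setdefault(a, set()).add(b)
    return trans

def constrained_sequence(trans, length, start, end, rng):
    """Sample a sequence of `length` symbols that starts at `start`, ends at
    `end`, and follows observed transitions, using backward filtering so the
    forward sampling never reaches a dead end."""
    # ok[i] = symbols at position i from which `end` is still reachable
    ok = [set() for _ in range(length)]
    ok[-1] = {end}
    for i in range(length - 2, -1, -1):
        ok[i] = {s for s, succ in trans.items() if succ & ok[i + 1]}
    if start not in ok[0]:
        return None  # no sequence satisfies the constraints
    seq = [start]
    for i in range(1, length):
        seq.append(rng.choice(sorted(trans[seq[-1]] & ok[i])))
    return seq

corpus = list("the cat sat on the mat and the cat ran")
trans = train(corpus)
seq = constrained_sequence(trans, 10, "t", "t", random.Random(0))
print("".join(seq))
```

Note that sampling uniformly among the filtered successors distorts the original transition probabilities; preserving the Markov distribution while satisfying arbitrary constraints is precisely the technical contribution of the cited papers.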

    • Named Data Network Security - 2012-03-30 - Alberto Compagno (Università di Padova)

      Abstract: Created in the 1960s, the Internet became one of the greatest successes in the history of modern communication. However, it is now showing its age. While the network technology still considers only connections between hosts, network use has evolved over the years from host-to-host communication to content retrieval and distribution. CDNs and P2P systems were created to keep up with this new use, but they only bridge the gap between network technology and network use: they are not part of the Internet infrastructure. In addition, the Internet architecture presents issues in terms of security and privacy: when it was created, no particular attention was paid to either. A number of protocols and applications to enforce user privacy have been created, but none is a complete solution, because of the weaknesses of the current infrastructure. Tor, for example, is a well-known solution that tries to fix the privacy problems of the current Internet, but it has performance limitations, and its implementation is not ready for extensive use. Hence, the next step is to create a new Internet infrastructure that adheres to current network use and enforces user privacy. Named Data Networking is an initial attempt that moves the emphasis from hosts to data, completely changing the basic concepts of the current Internet and introducing important privacy features. In this talk, we will introduce this new concept of Internet infrastructure, together with ANDaNA, a solution that provides anonymity in Named Data Networking.

    • Batch resolution protocols for wireless and RFID networks - 2012-03-30 - Andrea Zanella (Dip. Ing. dell'Informazione - UNIPD)

      Abstract: A batch resolution algorithm (BRA) is a channel access policy used by a group of nodes (the batch) that simultaneously generate a packet for a common receiver. The aim is to minimize the batch resolution interval (BRI), i.e., the time it takes for all nodes in the batch to successfully deliver their packet. Most existing BRAs require immediate feedback after each packet transmission, and typically assume that the feedback time is negligible. This assumption, however, fails to hold in practical high-rate wireless systems, so the classical performance analysis of BRAs may be overoptimistic. In this talk we give a quick overview of the classical tree-based BRAs and then present a novel BRA, named Adaptive Batch Resolution Algorithm with Deferred Feedback (ABRADE), which waives the immediate-feedback approach in favor of a deferred-feedback method based on a framed ALOHA access scheme. The frame length is optimized by a dynamic programming technique in order to minimize the BRI, under the assumption that the batch size is known. This assumption is then removed by enhancing ABRADE with a batch-size estimation module. The new algorithm, called ABRADE+, is compared against the best-performing BRAs based on the immediate-feedback paradigm, showing better performance both with partial and with no prior knowledge of the batch multiplicity.
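The deferred-feedback baseline that ABRADE builds on, framed slotted ALOHA with feedback only at frame boundaries, can be simulated in a few lines. Here the frame length is kept fixed; ABRADE's contribution is choosing it per frame via dynamic programming, which this sketch does not implement:

```python
import random

def framed_aloha_bri(batch_size, frame_len, rng):
    """Simulate framed slotted ALOHA with feedback deferred to frame
    boundaries: each unresolved node transmits in one uniformly chosen slot
    per frame; a slot with exactly one transmission is a success.
    Returns the total number of slots until the whole batch is resolved."""
    unresolved, slots_used = batch_size, 0
    while unresolved > 0:
        counts = {}
        for _ in range(unresolved):
            s = rng.randrange(frame_len)
            counts[s] = counts.get(s, 0) + 1
        unresolved -= sum(1 for c in counts.values() if c == 1)
        slots_used += frame_len
    return slots_used

rng = random.Random(1)
trials = [framed_aloha_bri(20, 20, rng) for _ in range(200)]
print("mean BRI over 200 runs:", sum(trials) / len(trials))
```

Playing with `frame_len` shows the tradeoff the dynamic program resolves: short frames waste feedback rounds on collisions, long frames waste idle slots once few nodes remain.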

    • Efficient Source Authentication Schemes in Wireless Sensor Networks - 2012-03-28 - Wafa Ben Jaballah (University of Bordeaux)

      Abstract: Wireless sensor networks (WSNs) are being widely deployed in military, healthcare, and commercial environments. Since sensor networks pose unique challenges, traditional security methods, commonly used in enterprise networks, cannot be directly applied. In particular, broadcast source authentication is a critical security service in wireless sensor networks, since it allows senders to broadcast messages to multiple receivers in a secure way. Public-key solutions such as Elliptic Curve Cryptography (ECC) and Identity-Based Cryptography (IBC) have been proposed, but they all suffer from severe energy-depletion attacks resulting from high computational and communication overhead. We present novel symmetric-key-based authentication schemes that exhibit low broadcast authentication overhead, thus avoiding the flaws inherent in the public-key-based schemes. Our schemes are built upon the integration of the multi-level μTESLA protocol, staggered authentication, and Bloom filters. Experimental results demonstrate that our authentication schemes are very efficient in terms of authentication delay, authentication probability, delay of forged packets in the receiver's buffer, memory, and energy consumption related to both computation and communication.
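Of the three building blocks mentioned, the Bloom filter is the easiest to sketch in isolation: a compact set representation with no false negatives and a tunable false-positive rate, which is what makes it attractive on memory-constrained sensor nodes. This generic sketch is not the talk's integrated scheme:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k bit positions derived from salted SHA-256.
    No false negatives; false positives occur with tunable probability."""
    def __init__(self, m_bits=1024, k=4):
        self.m, self.k = m_bits, k
        self.bits = 0  # m-bit array packed into a Python int

    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: bytes):
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add(b"authenticated-sender-42")
print(b"authenticated-sender-42" in bf)  # True: no false negatives
print(b"forged-sender" in bf)            # almost certainly False (FP rate ~ (kn/m)^k)
```

On a real mote one would size `m_bits` and `k` from the expected number of entries and the tolerable false-positive rate, and use a cheaper hash than SHA-256.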

    • Toward the creation of a Green Content Management System - 2012-02-09 - Matteo Ciman

      Abstract: The Computer Science degree programme website, informatica.math.unipd.it, has been updated with a custom CMS that minimizes server workload and response time for visitors. This seminar presents the various techniques and strategies that were adopted.

    • CRePE: a System for Enforcing Fine-Grained Context-Related Policies on Android - 2012-01-12 - Mauro Conti and Earlence Fernandes (Univ. di Padova and Vrije Universiteit Amsterdam, NL)

      Abstract: Current smartphone systems allow the user to exploit contextual information only marginally when specifying the behaviour of applications: this hinders the wide adoption of this technology to its full potential. We fill this gap by proposing CRePE, a fine-grained Context-Related Policy Enforcement system for Android. While the concept of context-related access control is not new, this is the first work that brings it into the smartphone environment. In particular, in our work a context can be defined by: the status of variables sensed by physical (low-level) sensors, like time and location; additional processing on these data via software (high-level) sensors; or particular interactions with the user or third parties. CRePE allows context-related policies to be set (even at runtime) by both the user and authorized third parties, locally (via an application) or remotely (via SMS, MMS, Bluetooth, and QR code). A thorough set of experiments shows that our full implementation of CRePE has a negligible overhead in terms of energy consumption, time, and storage, making our system ready for a production environment. In this talk, we will present CRePE, together with a demo session and a discussion of the most interesting insights from the programming of CRePE (which resulted in a modified version of Android).
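The flavour of context-related policy enforcement, predicates over sensed context gating permissions, can be sketched as follows. All names here are hypothetical illustrations, not CRePE's actual API:

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Context:
    hour: int          # low-level sensor: time
    location: str      # low-level sensor: position
    in_meeting: bool   # high-level sensor: derived, e.g. from the calendar

@dataclass
class Policy:
    applies: Callable[[Context], bool]  # context predicate
    allow: Set[str]                     # permissions granted while it applies

def check(policies: List[Policy], ctx: Context, permission: str) -> bool:
    """Grant `permission` only if some policy active in `ctx` allows it."""
    return any(p.applies(ctx) and permission in p.allow for p in policies)

policies = [
    Policy(applies=lambda c: c.location == "office" and not c.in_meeting,
           allow={"RECORD_AUDIO", "INTERNET"}),
    Policy(applies=lambda c: 9 <= c.hour < 18,
           allow={"INTERNET"}),
]

ctx = Context(hour=14, location="office", in_meeting=True)
print(check(policies, ctx, "INTERNET"))      # True: working-hours policy applies
print(check(policies, ctx, "RECORD_AUDIO"))  # False: blocked while in a meeting
```

The real system additionally has to resolve conflicts between policies from different authorities and re-evaluate contexts as sensor values change, which is where most of the engineering effort lies.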

    • What can we do in CP but not in SAT or ILP? - 2011-12-16 - Prof. Toby Walsh (NICTA and University of New South Wales)

      Abstract: The talk explores the connections between Integer Linear Programming (ILP), propositional satisfiability (SAT), and Constraint Programming (CP). The focus of the talk is on global constraints like AllDifferent, which are one of the key distinguishing features of constraint programming. I describe recent work on simulating the actions of global constraints with simple decompositions that can be implemented using SAT clauses or ILP models. Based on powerful lower bounds from circuit complexity, I argue that there are, however, some things that cannot be effectively simulated in ILP or SAT. The conclusion is that not everything can be done in SAT or ILP, and we do in fact need some of the powerful algorithmic techniques provided by CP.
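The simplest SAT decomposition of AllDifferent, the direct encoding with pairwise at-most-one clauses, can be written down explicitly. It is a useful reference point for the talk's argument: unit propagation on these clauses prunes less than the dedicated CP propagator for AllDifferent does.

```python
from itertools import combinations

def alldifferent_cnf(n_vars, n_vals):
    """Direct encoding of AllDifferent over n_vars variables with domains
    {0..n_vals-1}. Positive literal lit(i, v) means 'variable i takes value
    v'; literals are numbered i * n_vals + v + 1, DIMACS-style."""
    lit = lambda i, v: i * n_vals + v + 1
    clauses = []
    for i in range(n_vars):
        clauses.append([lit(i, v) for v in range(n_vals)])  # at least one value
        for v1, v2 in combinations(range(n_vals), 2):
            clauses.append([-lit(i, v1), -lit(i, v2)])      # at most one value
    for v in range(n_vals):
        for i1, i2 in combinations(range(n_vars), 2):
            clauses.append([-lit(i1, v), -lit(i2, v)])      # no value is shared
    return clauses

cnf = alldifferent_cnf(3, 3)
print(len(cnf))  # 21 clauses: 3 at-least-one + 9 at-most-one + 9 pairwise
```

The encoding is quadratic in size, and richer decompositions exist; the talk's point is that circuit-complexity lower bounds limit what any polynomial-size decomposition can simulate.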

    • Machine Intelligence, Generalized Rough Sets and Granular Mining: Concepts, Features and Applications - 2011-11-25 - Prof. Sankar K. Pal (Indian Statistical Institute, India)

      Abstract: Different components of machine intelligence and their characteristics are explained. The role of rough sets in uncertainty handling and granular computing is described. The significance of its integration with fuzzy sets, called rough-fuzzy computing, as a stronger paradigm for uncertainty handling, is explained. Different applications of rough granules, significance of f-granulation and certain emerging issues in their performance are stated. Generalized rough sets using the concept of fuzziness in granules as well as in sets are defined both for equivalence and tolerance relations. Different tasks such as case generation, class-dependent rough-fuzzy granulation for classification, rough-fuzzy clustering and defining entropy and various ambiguity measures for image analysis are then addressed in this regard, explaining the nature and characteristics of granules used therein. While the method of case generation with variable reduced dimension is useful for mining data sets with large dimension and size, class-dependent granulation coupled with neighborhood rough sets for feature selection is efficient in modeling overlapping classes. Significance of a new measure, called "dispersion" of classification performance, which focuses on confused classes for higher level analysis, is explained in this regard. Superiority of rough-fuzzy clustering is illustrated for determining bio-bases (c-medoids) in encoding protein sequences for analysis. Image ambiguity measures, which take into account both the fuzziness in boundary regions, and the rough resemblance among nearby gray levels and nearby pixels, are defined for various image analysis tasks. Merits of incorporating the concept of rough granules in gray level in addition to fuzziness for computing entropy are extensively demonstrated for the image segmentation problem, as an example. The talk concludes by stating the future directions of research and challenges, and the relevance to natural computing.
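The classical (crisp) rough-set approximations that all of these generalizations build on can be computed directly from the equivalence classes (granules) of the indiscernibility relation; the rough-fuzzy variants discussed in the talk generalize exactly these two operators:

```python
def approximations(partition, target):
    """Lower and upper rough-set approximations of `target` with respect to
    a partition of the universe into equivalence classes (granules)."""
    lower, upper = set(), set()
    for granule in partition:
        if granule <= target:   # granule certainly inside the target set
            lower |= granule
        if granule & target:    # granule possibly inside the target set
            upper |= granule
    return lower, upper

partition = [{1, 2}, {3, 4}, {5}]   # equivalence classes of indiscernibility
target = {2, 3, 4}
lower, upper = approximations(partition, target)
print(lower)  # {3, 4}
print(upper)  # {1, 2, 3, 4}
```

The boundary region `upper - lower` (here {1, 2}) is the "rough" part of the set: elements whose membership cannot be decided at the given granularity.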