What is a dissertation?
- It is a structured piece of writing that develops a clear line of thought (an ‘argument’) in response to a central question or proposition (‘thesis’).
- A dissertation is an extended piece of work, usually divided into chapters, and containing a significantly more detailed examination of your subject matter and evidence than is the case for most essays.
- Because you usually have much more responsibility in choosing your research topic, and for sourcing your supporting materials, your dissertation provides evidence of your ability to carry out highly independent study and research.
- You are typically expected to be clear about the methodology (investigative procedures and rules) you have used to gather and evaluate your evidence. This aspect of producing a dissertation has much greater emphasis than in a typical essay.
- Those of you undertaking analysis of quantitative data must similarly ensure that you adhere to the methodological requirements expected within your academic discipline and that you utilise the appropriate software. You must satisfy yourself as to these requirements within your subject area.
1. The Green Cloud
Cloud computing requires the management of distributed resources across a heterogeneous computing environment. These resources typically appear, from the user's viewpoint, to be “always on”. While techniques exist for distributing compute resources and presenting the user with an “always on” view, this has the potential to be highly inefficient in terms of energy usage. Over the past few years there has been much activity in building “green” (energy-efficient) equipment (computers, switches, storage) and energy-efficient data centres. However, there has been little work on modelling and demonstrating a capability that allows a heterogeneous, distributed compute cloud to use a management policy that also tries to be as energy efficient as possible. This project will explore the use of virtualisation in system and network resources in order to minimise energy usage whilst still meeting the service requirements and operational constraints of a cloud.
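One family of energy-aware management policies consolidates virtual machines onto as few physical hosts as possible so that idle hosts can be suspended. The sketch below, a minimal first-fit-decreasing bin-packing heuristic with invented capacity figures, illustrates the idea; a real policy would also respect service requirements and migration costs.

```python
# Illustrative sketch: first-fit-decreasing consolidation of VM CPU demands
# onto the fewest hosts, so that unused hosts can be powered down.
# Host capacity and VM demands are invented numbers, not measured data.

def consolidate(vm_demands, host_capacity):
    """Pack VM demands onto hosts using first-fit decreasing.

    Returns a list of hosts, each a list of the demands placed on it.
    """
    hosts = []  # each entry: [remaining_capacity, [placed demands]]
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:        # fits on an already-active host
                host[0] -= demand
                host[1].append(demand)
                break
        else:                            # no host had room: power one on
            hosts.append([host_capacity - demand, [demand]])
    return [h[1] for h in hosts]

placements = consolidate([0.5, 0.2, 0.7, 0.1, 0.4], host_capacity=1.0)
print(len(placements))   # hosts needed; the remainder could be suspended
```

The trade-off a project would explore is that tighter packing saves energy but leaves less headroom for load spikes, which is where the cloud's service requirements constrain the policy.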
2. Denial of Service Issues in Cloud Computing
As the Cloud offers dynamically provisioned resource allocation, what happens under denial of service attacks? Does the Cloud simply keep wasting more and more resources? Are there novel forms of DoS that are particularly dangerous for Clouds? Can denial of service protection be built into the Cloud, or must it be dealt with, as at present, at the Internet level?
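On the question of building protection into the cloud, one standard starting point is per-client rate limiting at the front end, so that a flood of requests cannot keep triggering resource provisioning. The sketch below is a minimal token-bucket limiter with illustrative rates, not a complete DoS defence:

```python
# Illustrative sketch: a token-bucket rate limiter of the kind a cloud
# front end might apply per client before requests reach dynamically
# provisioned resources. Rate and capacity values are invented.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Refill by elapsed time, then spend one token if available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=5)
# A burst of 10 requests all arriving at t=0: only the first 5 pass.
decisions = [bucket.allow(now=0.0) for _ in range(10)]
print(decisions.count(True))  # → 5
```

A cloud-specific research question is where such state should live once the front end itself is scaled out across instances.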
3. Cloud VV&T and Metrics
Verification, Validation and Testing (VV&T) are all necessary to basic system evaluation and adoption, but when the system and data sources are distributed these tasks are invariably done in an ad hoc manner. Normal strategies for testing code, applications or architecture may not be applicable in a cloud; software developed for a non-distributed environment may not work in the same way in a cloud, and multiple threads, network and security protocols may inhibit normal working. The future of testing will be different under new environments: novel system testing strategies may be required to facilitate verification, and new metrics will be required to describe levels of system competence and satisfaction.

There are many areas of research within the topic of Cloud VV&T, from formal verification through to empirical research and metric validation of multi-part or parallel analysis. Testing can be applied to systems, security, architecture models and other constructs within the Cloud environment. Failure analysis, taxonomies, error handling and recognition are all related areas of potential research.
4. Cloud Security
A major concern in Cloud adoption is security and the US Government has announced a Cloud Computing Security Group in acknowledgement of the expected problems such networking will entail. However, basic network security is flawed at best. Even with modern protocols, hackers and worms can attack a system and create havoc within a few hours. Within a Cloud, the prospects for incursion are many and the rewards are rich. Architectures and applications must be protected and security must be appropriate, emergent and adaptive. Should security be centralized or decentralized? Should one body manage security services? What security is necessary and sufficient? How do we deal with emergent issues?
There are many areas of research within the topic of Cloud Security, from formal aspects to empirical research outlining novel techniques. Cloud Privacy and Trust are further related areas of potential research. Another option is an investigation of the security features and protocols offered to applications running on clouds such as Azure and AWS: a comparative analysis of the features offered by several clouds, with results based on actual deployment of secure applications on these cloud platforms. Such a study would explore secure protocols, authentication mechanisms (e.g. federated identity) and related techniques.
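As a small, concrete example of the kind of authentication mechanism such a study would examine, the sketch below verifies HMAC-signed tokens against a shared secret. The secret, claim format and token layout are invented for illustration; real cloud platforms use richer schemes (e.g. SAS tokens, federated identity assertions).

```python
# Illustrative sketch: HMAC-signed tokens checked with a shared secret.
# The secret and the "claims.signature" format are invented for this example.
import base64
import hashlib
import hmac

SECRET = b"demo-shared-secret"   # assumption: provisioned out of band

def issue(claims: str) -> str:
    sig = hmac.new(SECRET, claims.encode(), hashlib.sha256).digest()
    return claims + "." + base64.urlsafe_b64encode(sig).decode()

def verify(token: str) -> bool:
    claims, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, claims.encode(), hashlib.sha256).digest()
    # compare_digest resists timing attacks on the signature comparison
    return hmac.compare_digest(base64.urlsafe_b64encode(expected).decode(), sig)

token = issue("user=alice;role=reader")
print(verify(token))                 # → True
print(verify(token[:-2] + "xx"))     # tampered signature → False
```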
5. Data migration in the cloud
The cost and time to move data around is currently one of the major bottlenecks in the cloud. Users with large volumes of data therefore may wish to specify where that data should be made available, when it may be moved around, etc. Furthermore, regulations, such as the data protection regulations, may place constraints on the movement of data and the national jurisdictions where it may be maintained. The aim of this project is to investigate the practical issues which affect data migration in the cloud and to propose mechanisms to specify policies on data migration and to use these as a basis for a data management system.
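One possible shape for such a policy mechanism is a declarative rule set that is checked before any migration is performed. The sketch below is a minimal default-deny checker; the dataset names and jurisdiction codes are invented, and a real system would also handle timing constraints ("when it may be moved") and policy composition.

```python
# Illustrative sketch: declarative migration policies vetted by a checker.
# Each dataset lists the jurisdictions where it may be held, reflecting
# e.g. data protection rules. Names and codes below are invented.

POLICIES = {
    "customer-records": {"allowed": {"UK", "EU"}},
    "public-datasets":  {"allowed": {"UK", "EU", "US"}},
}

def may_migrate(dataset: str, destination: str) -> bool:
    """Return True only if the destination jurisdiction is permitted."""
    policy = POLICIES.get(dataset)
    if policy is None:
        return False        # default-deny for datasets without a policy
    return destination in policy["allowed"]

print(may_migrate("customer-records", "EU"))  # → True
print(may_migrate("customer-records", "US"))  # → False
```

The interesting research questions start where this sketch stops: how a data management system enforces such policies when replication and caching move data implicitly.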
6. An Experimental Laboratory in the Cloud
Computer Science is highly suited to experimental science. Unfortunately, many Computer Scientists are very bad at conducting high quality experiments. The goal of this project is to make experiments better by using the Cloud in a number of ways. The core idea is that experiments are formed as artifacts, for example as a virtual machine that can be put into the cloud. For example, a researcher might want to experiment on the speed of their algorithms in different operating systems. They would make a number of different virtual machines containing each version, which would be sent to the cloud, the experiment run, and the results collected. As well as the results of running the experiment being stored in the cloud, the experiment itself is also there, making the cloud into an experimental repository as well as the laboratory. This enables reproducibility of experiments, a key concept that has too often been ignored. While using the cloud, the project can feed back into research on clouds by investigating how experiments involving the cloud itself can be formulated for use in our new Experimental Laboratory.
7. Harvesting Unused Resources and Ad hoc Cloud
The idea of an ad-hoc cloud is to deploy cloud services over an organization’s existing infrastructure, rather than using dedicated machines within data centres. Key to this approach is the ability of the cloud infrastructure to manage the use of computational and storage resources on individual machines, to the extent that the cloud is sufficiently non-intrusive that individuals will permit its operation on their machines. This project will investigate how this may be achieved.
The aim of this project is to investigate how underused computing resources within an enterprise may be harvested and harnessed to improve return on IT investment. In particular, the project seeks to increase efficiency of use of general purpose computers such as office machines and lab computers. As a motivating example, a University or College operates thousands of machines. In aggregate, their unused processing and storage resources represent a major untapped computing resource. The project will make harvested resources available in the form of ad-hoc clouds, the composition of which varies dynamically according to supply of resources and demand for cloud services.
8. Scalability in the Cloud
To achieve scale in the cloud for a web application there are two options: scale up (a bigger box or a bigger VM) or scale out (more boxes or VMs). Amazon EC2 provides an auto-scale feature; Azure does not just yet. The general recommendation seems to be to scale out, not up. When to scale is also a concern: scaling may be triggered by current load on the system, as determined by performance counters, or based on historical load information. Investigate two PaaS clouds (e.g. Azure and Google App Engine) and see what features they have to facilitate the scaling of compute resources. Implement scale up versus scale out to see the impact on the performance of a sample application. Also investigate what design requirements an application must meet for it to be able to scale out.
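The counter-driven trigger can be sketched very simply: compare a window of recent utilisation samples against thresholds and adjust the instance count. The thresholds, window and single CPU metric below are illustrative assumptions; production auto-scalers also use cool-down periods and multiple metrics.

```python
# Illustrative sketch: a scale-out/scale-in decision driven by performance
# counters. Thresholds and the CPU-only metric are invented for the example.

def scaling_decision(cpu_samples, instances, high=0.75, low=0.25):
    """Return the new instance count from a window of CPU utilisation
    samples (each in the range 0.0 to 1.0)."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return instances + 1              # sustained high load: scale out
    if avg < low and instances > 1:
        return instances - 1              # mostly idle: scale in, keep one
    return instances                      # within band: no change

print(scaling_decision([0.9, 0.8, 0.85], instances=2))  # → 3
print(scaling_decision([0.1, 0.2, 0.15], instances=2))  # → 1
print(scaling_decision([0.5, 0.5, 0.5], instances=2))   # → 2
```

Note that this only makes sense for an application designed to scale out, which is exactly the design-requirements question the project raises.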
9. Session State Management in the Cloud
An investigation of how session state management can be implemented in the cloud for web applications. For example, in Azure the standard session state management for ASP.Net applications is done in memory, which will not work where an application is scaled out over multiple instances, since session state is not sticky. Alternative session state management, involving storing the state in a persistent store such as Azure tables or SQL Azure, needs to be used instead. Compare and contrast the features of various session state management techniques on different cloud platforms, e.g. Azure, Google App Engine etc.
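The persistent-store alternative can be sketched as follows: sessions are written to a shared database instead of instance memory, so any scaled-out instance can serve any request. Here an in-memory SQLite database stands in for a store such as SQL Azure or Azure tables; the session id and state values are invented for the example.

```python
# Illustrative sketch: session state in a shared persistent store rather
# than one instance's memory. SQLite stands in for SQL Azure / Azure tables
# (a real deployment would connect every instance to the same server).
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, state TEXT)")

def save_session(session_id, state):
    """Serialise the session dictionary and upsert it into the store."""
    db.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)",
               (session_id, json.dumps(state)))

def load_session(session_id):
    """Fetch and deserialise a session, or None if it does not exist."""
    row = db.execute("SELECT state FROM sessions WHERE id = ?",
                     (session_id,)).fetchone()
    return json.loads(row[0]) if row else None

# Instance A writes the state; instance B, sharing the store, reads it back.
save_session("abc123", {"user": "alice", "basket": ["book"]})
print(load_session("abc123"))
```

A comparison across platforms would weigh this pattern's extra latency per request against the stickiness problem it removes.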
10. Caching in the Cloud
An investigation of caching solutions for web applications in PaaS clouds, e.g. Microsoft Azure and Google App Engine. Examine all types of caching that the cloud provides (e.g. the CDN in Azure), but also output caching and the caching of application data, e.g. ASP.Net caching in Azure, or distributed caching using memcached or Windows Azure AppFabric Caching. Investigate the importance of caching for applications running in the cloud, and examine the performance improvements that can be obtained from use of such technologies. Compare and contrast the features of the various caching solutions.
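For application data, the pattern most of these technologies support is cache-aside with a time-to-live: check the cache, and on a miss fetch from the backing store and cache the result. The sketch below uses a plain dictionary to stand in for memcached or Windows Azure AppFabric Caching; the key, TTL and store function are invented for the example.

```python
# Illustrative sketch: the cache-aside pattern with a TTL. A dict stands in
# for a distributed cache such as memcached or AppFabric Caching.
import time

cache = {}            # key -> (value, expiry_timestamp)
TTL_SECONDS = 60

def fetch_from_store(key):
    """Placeholder for a slow read from the backing database."""
    return f"value-for-{key}"

def get(key, now=None):
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry and entry[1] > now:       # cache hit, not yet expired
        return entry[0]
    value = fetch_from_store(key)      # miss: read through to the store
    cache[key] = (value, now + TTL_SECONDS)
    return value

print(get("product:42", now=0))        # miss → fetched and cached
print(get("product:42", now=30))       # hit within the TTL
print(get("product:42", now=120))      # expired → fetched again
```

Measuring how hit rate and TTL choice affect response times on a real PaaS cache is where the performance comparison in this project would begin.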
11. Federated Databases in the Cloud
Investigate the use of federated databases to provide scalability in the cloud. In Azure at the moment there is a 150 GB limit on the size of an SQL Azure database. For databases larger than this a federated solution is required, i.e. SQL Azure Federation, which is based on horizontal partitioning of data (sharding). Investigate the federated database solutions that commercial PaaS clouds provide. Use a case study to see the impact on the database and associated applications when a database is federated in the cloud, and investigate query performance.
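The core of horizontal partitioning is a routing function from a federation key to a member database. The sketch below uses stable hashing with an invented shard count and key format; SQL Azure Federation actually routes by key ranges, so this is the general sharding idea rather than that product's mechanism.

```python
# Illustrative sketch: hash-based sharding. A federation key is hashed to
# pick the member database holding the row. Shard count and keys invented.
import hashlib

NUM_SHARDS = 4

def shard_for(federation_key: str) -> int:
    """Map a federation key to a shard; stable across runs and processes."""
    digest = hashlib.sha256(federation_key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Rows for the same customer always route to the same shard,
# while different customers spread across the shard set.
assert shard_for("customer-17") == shard_for("customer-17")
shards_used = {shard_for(f"customer-{i}") for i in range(100)}
print(sorted(shards_used))
```

The case-study questions follow directly: queries confined to one shard stay fast, while cross-shard queries (joins, aggregates) must be fanned out and merged, which is the main application impact of federating a database.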