Big Data Management & Analytics

Overview

Take the next step in your career and become an independent, critically minded big data specialist with this Level 9 Postgraduate Diploma.



Why Study Big Data at Griffith College?

Designed specifically to address a growing need in the industry, the Postgraduate Diploma in Big Data Management and Analytics at Griffith College is a one-year programme, delivered on two evenings per week and on Saturdays.



The programme builds upon students' knowledge of computing science with the aim of creating big data specialists. As a graduate of this course, you will:

• Obtain specialist knowledge and skills essential for a career in Big Data Management and Analytics.

• Establish an analytical mindset necessary for independent academic and professional research.

• Gain a practical understanding of the appropriate design and implementation strategies used in the development of Big Data solutions.

• Develop the team-player attitude necessary to communicate problems, ideas and solutions at all levels of an industry team.

• Build upon your knowledge of supporting topics in the area of Computing Science.



Course Highlights

• Emerging discipline with huge job opportunities

• Develop highly sought-after skills

• Fully aligned with industry needs

• Access to innovative tools and technologies

• A dedicated, experienced lecturing team

Subjects taught

This programme contains eight taught modules, four of which are delivered in each of the two semesters.



Modules

Big Data Analytics

This module aims to equip the learner with a range of the most relevant topics in contemporary analysis practice, topics which are foundational to the emerging field of big data analytics. Learners are guided through the theoretical and practical differences between traditional datasets and Big Data datasets. An overview of the initial collection of data is explored for multiple data sources. A formal grounding in analytical statistics is a major part of the module curriculum. Learners are expected to apply principles of statistical analytics to solve problems and inform decision-making, developing knowledge and understanding of statistical analytics techniques and principles while applying them in typical real-world scenarios.
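
By way of illustration only, the short Python sketch below shows a two-sample t-test, one kind of statistical analysis used to inform a decision; the library (scipy) and the figures are illustrative assumptions, not part of the syllabus.

    # Illustrative sketch: a two-sample t-test of the kind covered in analytical statistics.
    # The sample figures below are invented for demonstration.
    from scipy import stats

    # Daily conversion rates (%) observed under two website layouts (hypothetical data)
    layout_a = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2]
    layout_b = [3.6, 3.4, 3.9, 3.5, 3.7, 3.8, 3.3]

    # Test whether the mean conversion rates differ significantly
    t_stat, p_value = stats.ttest_ind(layout_a, layout_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # A small p-value (e.g. below 0.05) suggests the difference is unlikely to be due to
    # chance alone, which is the kind of evidence used to choose between the two layouts.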



Information Retrieval and Web Search

This module introduces the learner to the concepts of information retrieval (IR) and web search. Learners encounter various techniques used in IR and means of evaluating their performance, and gain exposure to the practical design of large-scale IR systems commonly used in the web search domain. Current trends in IR, such as collection and data fusion, are introduced through the use of academic papers.
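
As a flavour of the techniques involved, the sketch below ranks documents against a query using TF-IDF weighting and cosine similarity, a core IR approach; Python and scikit-learn are used here only as illustrative tools and are not prescribed by the module.

    # Illustrative sketch: ranking documents against a query with TF-IDF and cosine similarity.
    # The documents and query are invented for demonstration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "big data storage with distributed file systems",
        "web search engines index billions of pages",
        "statistical analysis of large datasets",
    ]
    query = ["index pages for web search"]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform(query)

    # Score each document against the query and rank by similarity
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    for rank, idx in enumerate(scores.argsort()[::-1], start=1):
        print(f"{rank}. score={scores[idx]:.3f}  {documents[idx]}")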



Concurrent and Parallel Programming

The future of microprocessor development is based around multiprocessor, multicore architectures that will deliver the performance required by future application demands. The difficulty for software developers is to write programs that harness the power of these new architectures. As a result, the fundamental aim of this module is to teach the learner how to write software for these machines.



The general aims of the module are to give the learner an understanding of the need for, and advantages of, concurrent and parallel systems; mastery of a programming paradigm that differs from the single-threaded one; and a description of how processes and threads are managed on multiprocessor, multi-core machines. The learner will achieve an understanding and mastery of the many classical problems that arise with concurrent and parallel tasks; an awareness of issues such as fairness, process synchronisation and deadlock avoidance; the ability to write concurrent and parallel programs that solve real-world problems; an understanding of multi-core architectures and their significance for the implementation of parallel systems; and a mastery of notations for expressing solutions to parallel problems.
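
A minimal sketch of one such classical problem, assuming Python's threading module purely for illustration: a shared counter updated by several threads, protected by a lock so that no updates are lost.

    # Illustrative sketch: a shared-counter race condition and its fix with a lock,
    # the kind of synchronisation problem studied in this module.
    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            # Without the lock, the read-modify-write below can interleave
            # between threads and lose updates.
            with lock:
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # 400000 with the lock; without it, some updates may be lost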



Cloud Computing

This module aims to introduce the learner to the concept of cloud computing and how it differs from the client-server model of computation seen on the web today. Cloud computing applications are charged on a per-use basis, i.e. clients only pay for what they have used. Many companies use cloud computing to offload some of their work onto these clouds as a means of saving on software and hardware costs.



This new model of computation requires new methods of developing and creating software. At the end of this module the learner will be able to understand the different models of cloud applications (IaaS, PaaS, SaaS) and determine which is right for a given task, and will also understand the role virtualisation plays in the cloud and how it will impact their applications. Learners will also be able to develop basic web applications and deploy them to the cloud.
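
For illustration, the sketch below shows the kind of basic web application that could be packaged and deployed to a PaaS offering. Flask is used here only as an assumed example framework; the module does not prescribe a specific stack.

    # Illustrative sketch: a minimal web application suitable for deployment to a PaaS platform.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from the cloud"

    if __name__ == "__main__":
        # Locally this runs a development server; on a PaaS the platform's own
        # web server and scaling take over, billed on a per-use basis.
        app.run(host="0.0.0.0", port=8080)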



Big Data Management

This module aims to equip the learner with the skills to implement, from the batch layer to the speed layer, an end-to-end Big Data storage system using the most current technologies. As a grounding in the subject area, the learner is guided through an overview of the traditional approach to data storage and access, with all theory grounded in real-world technological examples. As technologies have progressed, the availability of data has increased dramatically, and the volumes of data dealt with in modern systems are far beyond what traditional systems can handle. During this module, the main failure points of traditional systems at this level of data will be explored.

Each layer of the Lambda Architecture will be explored in detail, from theory through to implementation with current technologies. At the lowest layer, the module demonstrates how to store Big Data in the fact-based model on a distributed file system, namely the Hadoop Distributed File System (HDFS). This layer is then connected to a read-oriented database, such as MongoDB or ElephantDB depending on the data type stored, to create the Serving Layer of the Lambda Architecture. Finally, this is connected to a lightweight database that can handle high-volume reads and writes to implement the high-level Speed Layer of the Lambda Architecture. All practical work is done on real-world data to emphasise the need for Big Data systems.
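
A conceptual sketch, in plain Python, of the idea behind the Lambda Architecture's query path: a query merges the batch layer's precomputed view with the speed layer's recent updates. In a real system these views would live in the technologies named above (HDFS, a serving database such as MongoDB, and a low-latency store); the dictionaries and figures here are illustrative only.

    # Conceptual sketch: answering a page-view count query by combining batch and speed layers.
    batch_view = {"page_a": 10_000, "page_b": 7_500}   # recomputed periodically from the master dataset
    speed_view = {"page_a": 42, "page_c": 5}           # incremental counts since the last batch run

    def query(page: str) -> int:
        """Combine batch and speed layer views to answer a query."""
        return batch_view.get(page, 0) + speed_view.get(page, 0)

    print(query("page_a"))  # 10042
    print(query("page_c"))  # 5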



Data Mining Algorithms and Techniques

This module aims to give learners a thorough understanding of the different data mining techniques, algorithms and tools necessary to infer information from large datasets. Learners will understand the underpinning concepts and principles that make these algorithms work, and will encounter and implement various data mining techniques.



Learners will be expected to take different sets of data and apply appropriate data mining techniques to them in order to infer information. From this they should be able to form hypotheses about their datasets and test those hypotheses in a controlled, scientific manner.
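
As a small illustration of that workflow, the sketch below fits a decision tree to a standard dataset and checks the result on held-out data; Python and scikit-learn are used here purely as assumed example tools, not as the module's prescribed toolkit.

    # Illustrative sketch: fit a model, then test it on unseen data so the conclusion
    # is evaluated in a controlled way rather than on the data used for training.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)

    print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")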



Applied Data Science

This module aims to introduce the learner to the fundamental principles of data science and to equip them with the “data-analytic thinking” necessary for extracting useful knowledge and business value from relevant datasets. The module introduces the learner to the principles underpinning the processes and strategies needed to solve real-world problems through data science techniques. It focuses on data science concepts as applied to practical, real-world problems, and aims to teach learners the underlying concepts behind data science and, most importantly, how to approach and succeed at problem-solving. Problem-solving and information discovery strategies are developed via in-depth analysis of existing Big Data implementations and case studies. As most of the information discovered from large datasets feeds directly into business decisions, both reporting and visualisation are an important element of this module.
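
By way of illustration of the reporting side, the sketch below summarises a small, invented sales dataset into a table that could feed a business report or dashboard; pandas is used here only as an assumed example library.

    # Illustrative sketch: a simple reporting step, aggregating hypothetical sales data by region.
    import pandas as pd

    sales = pd.DataFrame({
        "region":  ["North", "South", "North", "East", "South", "East"],
        "revenue": [1200, 950, 1430, 780, 1010, 660],
    })

    # Aggregate revenue by region and sort for reporting
    report = sales.groupby("region")["revenue"].agg(["sum", "mean"]).sort_values("sum", ascending=False)
    print(report)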



Research Methods

This module serves to significantly deepen the learner's research skills, both in relation to module-related assignments and later in the completion of a dissertation or dissertation by practice. Specifically, it extends the ability of self-directed learners by equipping them with the appropriate vocabulary for reflecting on, critiquing and evaluating their own work and that of others. Throughout the module, learners engage with a number of research methodologies and with current research issues and trends in computing science. The module also addresses the need for good project management skills and techniques for the successful delivery of any project.

Entry requirements

Candidates applying for this course should have a 2.2 Level 8 honours degree in Computing Science, a 2.2 Higher Diploma in Computing or a related discipline, an international equivalent, and/or relevant work experience. Those who have relevant work experience may be required to attend a virtual meeting with the Programme Director to establish suitability.

Duration

1 year. Full-Time / Part-Time.

Provisionally, the course will be held on Tuesday and Thursday evenings, and during the day on Saturdays.

Enrolment dates

Intake Dates:

We run two intakes for this course, commencing as follows:

Autumn: September*

Spring: February*



*subject to sufficient numbers.

Post Course Info

Progression

Graduates of the Postgraduate Diploma in Science in Big Data Management and Analytics course have the option to continue their studies at Griffith College by progressing to the MSc in Big Data Management and Analytics.

More details
  • Qualification letters

    PgDip

  • Qualifications

    Postgraduate Diploma (Level 9 NFQ)

  • Attendance type

    Full time, Part time, Evening, Daytime, Weekend

  • Apply to

    Course provider