
DATA 2001 Data Science

Preparation
Form a group of 2-3 students (within your enrolled tutorial where possible, or with your tutor’s permission otherwise).
• Initial data loading and cleaning should be completed in Python, then SQL should be used to merge datasets and produce
scores. This code should be collated in a neat, concise Jupyter notebook file.
• This unit’s Week 8 tutorial covers instructions for managing spatial data and the installation of PostGIS (the spatial extension
of PostgreSQL) on your local database server.
• A shapefile of the SA2 digital boundaries can be downloaded from the ABS website. Use it, alongside the data sources
on Canvas, to complete the tasks below.
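Once PostGIS is installed per the Week 8 tutorial, the setup can be verified and prepared directly in SQL. A minimal sketch — the table name `sa2_regions` and geometry column `geom` are placeholders for whatever your own schema uses:

```sql
-- Enable the spatial extension (run once per database, after installing PostGIS)
CREATE EXTENSION IF NOT EXISTS postgis;

-- Sanity check that the extension loaded
SELECT PostGIS_Full_Version();

-- A GiST index on each geometry column speeds up later spatial joins
-- (placeholder table/column names)
CREATE INDEX IF NOT EXISTS idx_sa2_geom ON sa2_regions USING GIST (geom);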

Tasks
Task 1
Import all datasets (clean if required) into your PostgreSQL server, using a well-defined data schema. These sources include:
• SA2 Regions: Statistical Area Level 2 (SA2) digital boundaries (feel free to filter this down to the "Greater Sydney" GCC).
• Businesses: Number of businesses by industry and SA2 region, reported by turnover size ranges.
• Stops: Locations of all public transport stops (train and bus) in General Transit Feed Specification (GTFS) format.
• Polls: Locations (and other premises details) of polling places for the 2019 Federal election.
• Schools: Geographical regions in which students must live to attend primary, secondary and future Government schools.
• Population: Estimates of the number of people living in each SA2 by age range (for "per capita" calculations).
• Income: Total earnings statistics by SA2 (for later correlation analysis).
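The Python cleaning step for the non-spatial CSVs can be sketched with the standard library alone. The column names below (`sa2_code`, `sa2_name`) are assumptions for illustration — adapt them to the actual files from Canvas:

```python
import csv
import io

def clean_rows(csv_text):
    """Parse a CSV extract, strip stray whitespace, and drop rows whose
    SA2 code is not a 9-digit number (column names are illustrative)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    cleaned = []
    for row in reader:
        code = (row.get("sa2_code") or "").strip()
        if len(code) == 9 and code.isdigit():
            cleaned.append({k: v.strip() for k, v in row.items()})
    return cleaned

# Toy extract with one valid and one malformed row
sample = """sa2_code,sa2_name,population
117011325, Example Region ,12000
bad_code,Broken Row,0
"""
rows = clean_rows(sample)
```

The cleaned rows can then be bulk-inserted into your PostgreSQL schema (e.g. via psycopg2's `executemany` or pandas' `to_sql`), keeping with the Python-then-SQL workflow described above.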

Task 2
Compute a score for how "bustling" each individual neighbourhood is, according to the formula provided on the next page, where
S is the sigmoid function, z is a normalised z-score, and 'young people' are defined as anyone aged 0-19. Feel free to calculate
scores only for SA2 regions with a population of at least 100, and you are welcome to extend the scoring function however
you deem necessary, so long as a rational explanation is provided (e.g. other mathematical standardisation techniques, mitigating
the impact of outliers, calculating some metrics per capita or per square kilometre, etc).
To encourage extensions of the basic suggested scoring function, note that the z_business definition is intentionally
broad: select a cross-section of specific industries within the provided dataset (e.g. "Retail Trade") that you believe best
reflects how "bustling" the area is (describe your rationale in the report), and use this to calculate that component.
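Since the actual formula lives on the next page, only the shape of the pipeline is sketched here: standardise each raw measure to a z-score, combine the z-scores, and squash the result through the sigmoid S(x) = 1 / (1 + e^-x). The particular components and the plain sum used below are assumptions for illustration — substitute the formula as given:

```python
import math
from statistics import mean, stdev

def sigmoid(x):
    """S(x) = 1 / (1 + e^(-x)): maps any real combined score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def z_scores(values):
    """Standardise raw per-region measures to z-scores."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Toy per-region measures (illustrative numbers only)
stops   = [12, 40, 7, 25]   # public transport stops per region
schools = [2, 5, 1, 3]      # school catchments intersecting each region
polls   = [1, 4, 1, 2]      # polling places per region

components = [z_scores(m) for m in (stops, schools, polls)]
scores = [sigmoid(sum(zs)) for zs in zip(*components)]
```

Because the sigmoid is monotone, a region that leads on every component also receives the highest final score, and all scores land strictly between 0 and 1.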

Task 3
Extend the score by sourcing one additional dataset for each group member, and then incorporating all new datasets into your
scoring function. For full marks, at least one dataset should be of spatial data, and at least one should be of a type not used so
far in this assignment (e.g. JSON, XML, or collated via web scraping). Almost any subject matter is permissible, so long as it can
be justified as relevant to the calculation of our "bustling" metric (e.g. public facilities, other census statistics, local wildlife, etc).
For either version of your scoring function (or both!), the following subtasks should also be achieved:
• Visualise your score in an engaging way, and summarise key results in a table (ideally including a useful map-overlay
visualisation, or an interactive graph).
• Include in-depth analysis into your results. Note interesting findings, discuss their limitations, and summarise key conclusions.
• Determine if there is any correlation between your score and the median income of each region.
• Ensure at least one useful index (ideally spatial) has been used for your calculations.
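For the correlation subtask, Pearson's r is the usual choice (scipy.stats's `pearsonr` and pandas' `.corr()` compute the same quantity); a self-contained sketch with illustrative numbers:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: "bustling" scores vs median income per SA2 (illustrative only)
score  = [0.2, 0.5, 0.7, 0.9]
income = [48000, 61000, 70000, 83000]
r = pearson_r(score, income)
```

A value near +1 or -1 indicates a strong linear relationship; a value near 0 indicates little or none — discuss whichever you observe in the report.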

Deliverables
All deliverables are due in Week 12, no later than 11:00pm on Tuesday the 14th of May.
1. PDF Report: This should be no more than 6 pages (plus an optional appendix), in which you document your data integration
steps and the main outcomes of your analysis. Your document should contain the following:
Dataset Description: What are your data sources? How did you obtain and pre-process the data?
Database Description: How was your schema established (preferably including a database diagram), and how was the
data integrated? What index(es) did you create, and why?
Score Analysis: Describe the formula used to compute your score for each region (including how it was extended with
extra datasets), and give an overview of your results. This section will likely be the longest and most detailed.
Correlation Analysis: How well does your score correlate with the median income of each SA2 region? Are these results
surprising? Make any final observations about the usefulness or limitations of your scores.
Additional Analysis: A final section for DATA2901 students, based on their extra requirements.
2. Jupyter Notebook: A file containing your entire data workflow.
3. Short Demo: A brief conversation with your tutor (not a formal presentation) in the Week 12 tutorials (or Week 13, if
necessary). This allows time to discuss the decisions behind your work, and is not a marked component, but is mandatory for
any marks to be received.
The marking rubric will be available on Canvas.
Late submission penalty: -5% of the available marks per day late, down to a minimum mark of 0% after 5 days.
Please submit a single zip file per group, containing all deliverables, electronically via Canvas.
Students must retain electronic copies of their submitted assignment files and databases, as the unit coordinator may request to
inspect these files before marking of an assignment is completed. If these assignment files are not made available to the unit
coordinator when requested, the marking of this assignment may not proceed.