# General assignment information
To open an Assignment in JupyterHub, click the launch button (🚀) at the top of the Assignment page of this site.
You can also do this for lecture notebooks.
All lecture slides and homework templates can be found under `class_materials/`. The contents of this directory will be automatically updated from the GitHub repository, but any changes you make should be preserved.
- **Read the instructions carefully.** Like word problems from math class, they are very specific about what they are asking for.
- **Spot-check your results.** If you are transforming data from a previous step, compare the before and after, do a handful of the calculations manually, etc. to ensure that the results are correct.
- **Don't repeat yourself (DRY).** If you find yourself copying and pasting code within a notebook, there's probably a better way to do it.
- **Avoid hard-coding values.** Don't rely on things like row numbers or column order being stable, in case the dataset is updated.
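As an illustration of avoiding hard-coded row positions, here is a sketch with made-up data and column names — selecting by value keeps working even if the dataset's row order changes:

```python
import pandas as pd

# Toy data standing in for a real dataset (hypothetical column names)
df = pd.DataFrame({
    "borough": ["Bronx", "Queens", "Bronx"],
    "complaints": [10, 20, 30],
})

# Fragile: breaks if the dataset's row order ever changes
bronx_fragile = df.iloc[[0, 2]]

# Robust: selects by value, regardless of row order
bronx = df[df["borough"] == "Bronx"]

print(bronx["complaints"].sum())  # 40
```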
Open the JupyterHub file browser.
Navigate to the folder your notebook is in.
From Python, use
JupyterHub has a disk storage limit of 1GB (a.k.a. 1,024 MB or 1,048,576 KB) across all your files, and a memory limit of 3GB.
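If you want to check your usage from Python, a standard-library-only sketch like this can total the size of your files (the example path is illustrative):

```python
from pathlib import Path

def dir_size_mb(directory):
    """Total size of all files under directory, in megabytes."""
    total = sum(p.stat().st_size for p in Path(directory).rglob("*") if p.is_file())
    return total / 1_048_576

# Example: check usage against the 1,024 MB limit
# print(f"{dir_size_mb(Path.home()):.0f} MB of 1,024 MB used")
```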
## Reducing data size
You can make data smaller before uploading by filtering it through:
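For example, with pandas you could keep only the columns and rows you need before saving — a sketch in which the file names, column names, and year cutoff are all hypothetical:

```python
import pandas as pd

def shrink_csv(src, dest, keep_columns, min_year):
    """Read only the needed columns, keep recent rows, and write a smaller CSV."""
    df = pd.read_csv(src, usecols=keep_columns)
    df[df["year"] >= min_year].to_csv(dest, index=False)

# e.g. shrink_csv("data.csv", "data_small.csv", ["year", "agency", "amount"], 2020)
```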
Ensure all the outputs are visible and the notebook is cleaned up.
This is a good time to run the notebook end-to-end with Restart and run all (⏩).
See general scoring criteria.
Leave your name off the notebook filename and the notebook itself, as assignments are graded anonymously.
Export the notebook as a PDF. From the Jupyter interface, go to:
PDF via LaTeX (PDF)
Glance through the PDF to ensure everything is showing up as you intend.
What you see is what the instructors will see.
If one of the Homeworks: Upload the PDF to the Brightspace Assignment.
If the Final Project:
When you’re ready to have it formally re-graded, please resubmit through the same Assignment in Brightspace.
After the resubmission deadline passes for each Assignment, the solutions will be posted in
Note: In-class exercises will not be graded.
Plotly charts/maps not appearing: Include the boilerplate code.
```python
import plotly.io as pio

pio.renderers.default = "notebook_connected+pdf"
```
500 error: You may be outputting too much data. Try reducing your output (in the Jupyter sense) to smaller subsets.
`choropleth_mapbox()`, nothing appears on the map: Make sure:

- `locations` corresponds to the DataFrame column name, and
- `featureidkey` is set to `properties.<property name>`, matching the GeoJSON
- The column and the GeoJSON properties have values that match
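One way to debug the value matching is to compare the two sets of values directly. This is a sketch with made-up borough names and a made-up `boro_name` property (which would pair with `featureidkey="properties.boro_name"`); substitute your own:

```python
# Hypothetical GeoJSON and data values; substitute your own
geojson = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {"boro_name": "Bronx"}, "geometry": None},
        {"type": "Feature", "properties": {"boro_name": "Queens"}, "geometry": None},
    ],
}
data_values = {"Bronx", "Brooklyn"}  # e.g. set(df["borough"])

# Pull the property values out of the GeoJSON and diff the two sets
geo_values = {f["properties"]["boro_name"] for f in geojson["features"]}
print("In data but not GeoJSON:", data_values - geo_values)  # {'Brooklyn'}
print("In GeoJSON but not data:", geo_values - data_values)  # {'Queens'}
```

Any names that show up in only one of the two sets are the ones that won't appear on the map.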
`.loc[condition, "column name"] = …`. More details.
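A minimal example of conditional assignment with `.loc`, using toy data and a made-up threshold:

```python
import pandas as pd

df = pd.DataFrame({"amount": [5, 50, 500]})

# Set "size" only on the rows where the condition holds;
# rows that don't match are left as NaN
df.loc[df["amount"] >= 50, "size"] = "large"
print(df)
```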
`input()` stuck: Jupyter can be a bit buggy when dealing with interactive input. If it seems to get stuck or you aren't seeing a prompt when you'd expect one, try clicking the
If you get a `Disk is full` / `No space left on device` error: You've used all the available disk space. If you do fill it up, your server may not be able to start again (`spawn failed`). You'll need to delete one or more large files that you don't need anymore:
1. If your server is started already (you're seeing notebooks), click Stop My Server.
2. Go to start your server again (visit JupyterHub) and choose Troubleshooting Only - Clear Disk.
3. Look at the file sizes Jupyter shows in the file browser.
4. Delete one or more large files.
5. If you're still using those datasets, make them smaller.
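If it's hard to spot the culprits in the file browser, a standard-library-only sketch like this can list the largest files (the example directory is illustrative):

```python
from pathlib import Path

def largest_files(directory, n=10):
    """Return the n largest files under directory as (size_bytes, path) pairs."""
    files = [(p.stat().st_size, str(p)) for p in Path(directory).rglob("*") if p.is_file()]
    return sorted(files, reverse=True)[:n]

# Example: show the five biggest files under your home directory
# for size, path in largest_files(Path.home(), n=5):
#     print(f"{size / 1_048_576:.1f} MB  {path}")
```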
## Error loading notebook
This error can happen if you tried to output a lot of data in tables/charts. Steps to resolve:
Open the JupyterHub file browser.
Run the following, changing the path at the end to match whatever notebook needs to be repaired:

```shell
jupyter nbconvert --to notebook --clear-output ~/class_materials/hw_<NUMBER>.ipynb
```
If you're confused by these instructions, download the notebook file and email it to the instructor.
The kernel is where Python is installed and where your code actually executes, on a server in the cloud.
Make sure `Python [conda env:python-public-policy]` is selected as the kernel, shown in the top right of the notebook interface.
If your kernel is repeatedly crashing, you’re probably running out of memory.
- Make sure you aren't loading datasets you don't need.
- If loading a new dataset, make it smaller.
- Close kernels you aren't using from the Running page.