Links fixed and OSF repository made public 4Oct18.
Adult C. elegans, alive vs dead. Approx 3 frames per second. Column labels are in the images. Note that the plates are rotated 90 degrees CW in the images.
- Columns 1 and 2 contain live adult worms and eggs.
- Columns 3 and 4 contain a 50:50 mixture (by volume) of the live and dead worm samples.
- Columns 5 and 6 contain dead worms. Eggs in these samples were present before killing.
Worms were killed by two cycles of freezing for 20 minutes and thawing.
Please note that it is very difficult to kill worms without causing mechanical damage. Many nematicidal compounds cause paralysis, and worms frequently recover once the nematicidal agent is washed away. Also note that we have observed paralysed worms in various conformations (straight, curved, coiled, etc.), so paralysis is not as straightforward a phenotype as people often imagine.
Images for this experiment are here (moved to OSF).
Time course of paralysis under aldicarb treatment. Aldicarb is a pesticide, formerly in widespread agricultural use, that causes paralysis of adult C. elegans. Conveniently, aldicarb also stimulates egg-laying. Therefore, over the course of this experiment, you should be able to observe two chemically-induced phenotypes:
- Increasing paralysis
- Increasing egg count
I have included seven different doses of aldicarb and a negative control, each in 6 replicate wells:
A1:A6 8 mM aldicarb
B1:B6 6 mM aldicarb
C1:C6 4 mM aldicarb
D1:D6 2 mM aldicarb
E1:E6 1 mM aldicarb
F1:F6 0.5 mM aldicarb
G1:G6 0.25 mM aldicarb
H1:H6 No aldicarb (negative control)
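For convenience when analysing the data, the plate layout above can be written out as a simple mapping. This is an illustrative sketch only; the dictionary name and the convention (row letter to dose in mM) are my own, not part of the dataset.

```python
# Plate rows mapped to aldicarb concentration in mM, taken from the
# layout listed above. Row H is the no-aldicarb negative control.
ALDICARB_MM = {
    "A": 8.0, "B": 6.0, "C": 4.0, "D": 2.0,
    "E": 1.0, "F": 0.5, "G": 0.25, "H": 0.0,
}
```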
I captured this experiment at 1 sample per minute for 3 hours, to give you some appreciation of the timescales over which most events occur in this assay. We would gather much less data for a screening experiment. Here, “a sample” is composed of two images, separated by a 500 ms interval. By comparing each pair of images (<sample_number>.tif and <sample_number>b.tif), we can derive a movement score.
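As a rough sketch of how a movement score could be derived from such an image pair: count the pixels that differ noticeably between the two frames. This is an assumption about the general approach, not our exact pipeline; in particular, the threshold value here is arbitrary.

```python
import numpy as np

def movement_score(frame_a: np.ndarray, frame_b: np.ndarray,
                   threshold: int = 10) -> float:
    """Fraction of pixels that changed between two frames taken 500 ms apart.

    Assumes 8-bit grayscale images; the threshold of 10 grey levels is an
    illustrative choice, not the value used in the experiment.
    """
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return float(np.mean(diff > threshold))
```

Identical frames score 0; a frame pair in which every pixel changed scores 1.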
Here is a plot of our simple movement scores; note that I would normally calculate these scores live, during experiment runtime:
There is a numpy array containing this data here (well A1 is at index [0, 0, t]; well H6 is at [7, 5, t]). I have included our movement data images here; blue denotes invariant pixels, green denotes pixels that changed between the two successive images. For these movement data images, there is a single image for each timepoint, containing movement information derived from the two successive images in the raw data.
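The indexing described above suggests an array of shape (8 rows, 6 columns, timepoints). A minimal loading-and-indexing sketch, assuming a filename of `movement_scores.npy` (not specified in the text) and 180 timepoints (3 hours at 1 sample per minute):

```python
import numpy as np

# "movement_scores.npy" is an assumed filename; the shape (8, 6, 180)
# follows the stated indexing (A1 at [0, 0, t], H6 at [7, 5, t]) and
# the 3 h / 1-per-minute sampling described above.
try:
    scores = np.load("movement_scores.npy")
except FileNotFoundError:
    scores = np.zeros((8, 6, 180))  # placeholder with the expected shape

a1_trace = scores[0, 0, :]       # well A1 over time
h6_trace = scores[7, 5, :]       # well H6 over time
row_means = scores.mean(axis=1)  # average the 6 replicate wells per dose row
```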