Vale Polly Samuel (1963-2017): On Dying & Death

Those of you who follow me on Twitter will know some of this already, but I’ve been meaning to write about it here for quite some time. It’s taken me almost two years, because it’s so difficult to find the words to describe this. I’ve finally decided to take the plunge and finish it despite feeling it could be better, because if I don’t I’ll never get it out.

September 2016 was not a good month for my wonderful wife Polly. She’d been having pains around her belly and, after prodding the GP, she managed to get a blood test ordered. They had suspected gallstones or gastritis, but when the call came one evening to come in urgently the next morning for another blood test we knew something was up. After the blood test we were sent off for an ultrasound of the liver, and with that out of the way went out for a picnic on Mount Dandenong for a break. Whilst we were eating we got another phone call from the GP, this time to come and pick up a referral for an urgent MRI. We went to pick it up, but when they found out Polly had already eaten they realised they would need to convert it to a CT scan. A couple of phone calls later we were booked in for one that afternoon. That evening there was another call to come back to see the GP. We were pretty sure we knew what was coming.

The news was not good: Polly had “innumerable” tumours in her liver. Over 5 years after surgery and chemo for her primary breast cancer, and almost at the end of her 5 years of tamoxifen, the cancer had returned. We knew the deal with metastatic cancer, but it was still a shock when the GP said “you know this is not a curable situation”. So the next day (Friday) it was right back to her oncologist, who took her off the tamoxifen immediately (as it was no longer working) and scheduled chemotherapy for the following Monday, after an operation to install a PICC line. He also explained what this meant: that this was just a management technique to (hopefully) try and shrink the tumours and make life easier for Polly for a while. It was an open question how long that while would be, but from the statistics in the papers she had found online we knew it was likely months, not years, that we had. Polly wrote about it all at the time, far more eloquently than I could and with more detail, on her blog.

Chris, my husband, best pal, and the love of my life for 17 years, and I sat opposite the oncologist. He explained my situation was not good, that it was not a curable situation. I had already read that extensive metastatic spread to the liver could mean a prognosis of 4-6 months but if really favorable as long as 20 months.

The next few months were a whirlwind of chemo, oncology, blood tests, crying, laughing and loving. We were determined to talk about everything, and Polly was determined to prepare as quickly as she could for what was to come. They say you should “put your affairs in order” and that’s just what she did, financially, business-wise (we’d been running an AirBNB and all those bookings had to be canceled ASAP, plus of course all her usual autism consulting work) and personally. I was so fortunate that my work was so supportive and able to be flexible about my hours and days and so I could be around for these appointments.

Over the next few weeks it was apparent that the chemo was working: breathing & eating became far easier for her, and a follow-up MRI later on showed that the tumours had shrunk by about 75%. This was good news.

October 2016 brought Polly’s 53rd birthday, and so she set about planning a living wake for herself, with a heap of guests, music courtesy of our good friend Scott, a lot of sausages (and other food) and good weather. Polly led the singing and there was an awful lot of merriment. Such a wonderful time, and such good memories were made that day.

Polly singing at her birthday party in 2016

That December we celebrated our 16th wedding anniversary together at a lovely farm-stay place in the Yarra Valley before having what we were pretty sure was our last Christmas together.

Polly and Chris at the farm-stay for our wedding anniversary

But then in January came the news we’d been afraid of: the blood results were showing that the first chemo had run out of steam and stopped working, so it was on to chemo regime #2. A week after starting the new regime we took a delayed holiday up to the Blue Mountains in New South Wales (we’d had to cancel previously due to her diagnosis) and spent a long weekend exploring the area and generally having fun.

Polly and Chris at Katoomba, NSW

But in early February it was clear that the second line chemo wasn’t doing anything, and so it was on to the third line chemo. Polly had also been having fluid build up in her abdomen (called ascites) and we knew they would have to start draining that at some point; February was that point. We spent the morning of Valentine’s Day in the radiology ward, where they drained around 4 litres from her! The upside was that it made life so much easier again for her. We celebrated by going out for a Valentine’s dinner to a really wonderful restaurant that we used for special occasions, something we hadn’t thought possible that morning!

Valentine's Day dinner at Copperfields

Two weeks after that we learned from the oncologist that the third line chemo wasn’t doing anything either, and he had to give us the news that there wasn’t any treatment he could offer that had any prospect of helping. Polly took that in her usual pragmatic and down-to-earth way, telling the oncologist that she didn’t see him as the reaper but as her fairy godfather who had given her months of extra quality time, bringing a smile to his face and mine. She also asked whether the PICC line (which meant she couldn’t have a bath, just shower with a protective cover over it) could come out, and the answer was “yes”.

The day before that news we had visited the palliative ward there for the first time. Polly had a hard time with hospitals, so we spent time talking to the staff and visiting rooms, with Polly all the while reframing it to reduce and remove the anxiety. The magic words were “hotel-hospital”, which it really did feel like. We talked with the oncologist about how it all worked and what might happen.

We also had a home palliative team who would come and visit, help with pain management and be available on the phone at all hours to give advice and assist where they could. Polly felt uncertain about them at first as she wasn’t sure what they would make of her language issues and autism, but although they did seem a bit fazed initially by someone dealing so bluntly and straightforwardly with the fact that she was dying, things soon smoothed out.

None of this stopped us living, we continued to go out walking at our favourite places in our wonderful part of Melbourne, continued to see friends, continued to joke and dance and cry and laugh and cook and eat out.

Polly on miniature steam train

Oh, and not forgetting putting a new paved area in so we could have a little outdoor fire area to enjoy with friends!

Chris laying paving slabs for fire area
Polly and Morghana enjoying the fire!

But over time the ascites was increasing, with each drain being longer, with more fluid, and more taxing for Polly. She had decided that when it got to the point of needing two drains a week, that was enough and it would be time to call it a day. Then, on a Thursday evening after we’d spent the afternoon laying paving slabs for another little patio area, Polly was having a bath whilst I was researching some new symptoms that had appeared, and when she emerged I showed her what I had found. The symptoms matched what happens when the pressure that causes the ascites gets high enough to push blood back down other pathways, and as we read what else could lie in store Polly decided that was enough.

That night Polly emailed the oncologist to ask them to cancel her drain, which was scheduled for the next day, and instead to book her into the palliative ward. We then spent our final night together at home, before waking the next day to the call confirming that all was arranged from their end and that they would have a room by 10am, but that we should arrive whenever was good for us. Friends were informed, and Polly and I headed off to the palliative ward, saying goodbye to the cats and leaving our house together for the very last time.

Arriving at the hospital we dropped in to see the oncology, radiology and front-desk staff we knew to chat with them, before heading up to the palliative ward to meet the staff there and set up in the room. The oncologist visited and we had a good chat about what would happen with pain relief and sedation once Polly needed it. Shortly after, our close friends Scott and Morghana arrived from different directions, and as I had brought Polly’s laptop and a 4G dongle her good Skype pal Marisol joined us virtually. We shared a (dairy free) Easter egg, some raspberry lemonade and even some saké! We had brought in a portable stereo and CDs and danced and sang and generally made merry – which was so great.

After a while Polly decided that she was too uncomfortable and needed the pain relief and sedation, so everything was put in its place and we all said our goodbyes to Polly as she was determined to do the final stages on her own, and she didn’t want anyone around in case it caused her to try and hang on longer than she really should. I told her I was so proud of her and so honoured to be her husband for all this time. Then we left, as she wished, with Scott and Morghana coming back with me to the house. We had dinner together at the house and then Morghana left for home and Scott kindly stayed in the spare room.

The next day Scott and I returned to the hospital. Polly was still sleeping peacefully, so after a while he and I had a late lunch together, making sure to fulfil Polly’s previous instructions to go and enjoy something that she couldn’t, and then we went our separate ways. I had not been home long before I got the call from the hospital – Polly was starting to fade – so I contacted Scott and we both made our way back there again. The staff were lovely; they managed to rustle up some food for us, as well as tea and coffee, and would come and check on us in the waiting lounge, next door to where Polly was sleeping. At one point the nurse came in and said “you need a hug, she’s still sleeping”. Then, a while after, she came back in and said “I need a hug, she’s gone…”.

I was bereft. Whilst intellectually I knew this was inevitable, the reality of knowing that my life partner of 17 years was gone was so hard. The nurse told us that we could see Polly now, and so Scott and I went to say our final goodbye. She was so peaceful, and I was grateful that things had gone as she wanted and that she had been able to leave on her own terms, without the greater discomforts and pain that she was worried would still be coming. Polly had asked us to leave a CD on, and as we were leaving the nurses said to us “oh, we changed the CD earlier on today because it seemed strange to just have the one on all the time. We put this one on by someone called ‘Donna Williams’, it was really nice.” So they had, unknowingly, put her own music on to play her out.

As you would expect if you had ever met Polly she had put all her affairs in order, including making preparations for her memorial as she wanted to make things as easy for me as possible. I arranged to have it live streamed for friends overseas and as part of that I got a recording of it, which I’m now making public below. Very sadly her niece Jacqueline, who talks at one point about going ice skating with her, has also since died.

Polly and I were so blessed to have 16 wonderful years together, and even at the end the fact that we did not treat death as a taboo and talked openly and frankly about everything (both as a couple and with friends) was such a boon for us. She made me such a better person and will always be part of my life, in so many ways.

Finally, I leave you with part of Polly’s poem & song “Still Awake”…

Time is a thief, which steals the chances that we never get to take.
It steals them while we are asleep.
Let’s make the most of it, while we are still awake.

Polly at Cardinia Reservoir, late evening

Submission to Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples

Tonight I took some time to send a submission in to the Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples in support of the Uluru Statement from the Heart from the 2017 First Nations National Constitutional Convention held at Uluru. Submissions close June 11th so I wanted to get this in as I feel very strongly about this issue.

Here’s what I wrote:

To the Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples,

The first peoples of Australia have lived as part of this continent for many times longer than the ancestors of James Cook lived in the UK(*), let alone this brief period of European colonisation called Australia.

They have farmed, shaped and cared for this land over the millennia, they have seen the climate change, the shorelines move and species evolve.

Yet after all this deep time as custodians of this land they were dispossessed via the convenient lie of Terra Nullius and through killing, forced relocation and introduced sickness had their links to this land severely beaten, though not fatally broken.

Yet we still have the chance to try and make a bridge and a new relationship with these first peoples; they have offered us the opportunity for a Makarrata and I ask you to grasp this opportunity with both hands, for the sake of all Australians.

Several of the component states and territories of this recent nation of Australia are starting to investigate treaties with their first peoples, but this must also happen at the federal level as well.

Please take the Uluru Statement from the Heart to your own hearts, accept the offering of Makarrata & a commission and let us all move forward together.

Thank you for your attention.

Yours sincerely,
Christopher Samuel

(*) Australia has been continuously occupied for at least 50,000 years, almost certainly for at least 60,000 years and likely longer. The UK has only been continuously occupied for around the last 10,000 years after the last Ice Age drove its previous population out into warmer parts of what is now Europe.

Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s now over 1/5th of my life since he died; it doesn’t seem that long.

Vale dad, love you…

Playing with Shifter Part 2 – converted Docker containers inside Slurm

This is continuing on from my previous blog about NERSC’s Shifter which lets you safely use Docker containers in an HPC environment.

Getting Shifter to work in Slurm is pretty easy: it includes a SPANK plugin that you must install and tell Slurm about. My test config (in Slurm’s plugstack.conf) was just:

required /usr/lib64/shifter/shifter_slurm.so shifter_config=/etc/shifter/udiRoot.conf

as I was installing by building RPMs (our preferred method is to install the plugin into our shared filesystem for the cluster so we don’t need to have it in the RAM disk of our diskless nodes). Once that is done you can add the shifter program’s arguments to your Slurm batch script and then just call shifter inside it to run a process, for instance:

#!/bin/bash

#SBATCH -p debug
#SBATCH --image=debian:wheezy

shifter cat /etc/issue

results in the following on our RHEL compute nodes:

[samuel@bruce Shifter]$ cat slurm-1734069.out 
Debian GNU/Linux 7 \n \l

simply demonstrating that it works. The advantage of using the plugin and this way of specifying the image is that the plugin will prep the container for us at the start of the batch job and keep it around until the job ends, so you can keep running commands in your script inside the container without the overhead of creating and destroying it each time. If you need to run something in a different image you just pass the --image option to shifter; that container will need to be set up and torn down on the fly, but the one you specified for your batch job is still there.
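For instance (a hypothetical snippet, just reusing image names that appear elsewhere in these posts), later in the same batch script you could run:

# Runs in the job's image (debian:wheezy), which the plugin has already set up
shifter cat /etc/debian_version

# Runs in a different image; shifter sets this one up and tears it down itself,
# while the job's debian:wheezy container stays around until the job ends
shifter --image=ubuntu:16.04 cat /etc/lsb-release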

That’s great for single CPU jobs, but what about parallel applications? Well, it turns out that’s easy too – you just request the configuration you need and slap srun in front of the shifter command. You can even run MPI applications this way successfully. I grabbed the dispel4py/docker.openmpi Docker container with shifterimg pull dispel4py/docker.openmpi and tried its Python version of the MPI hello world program:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=dispel4py/docker.openmpi
#SBATCH --ntasks=3
#SBATCH --tasks-per-node=1

shifter cat /etc/issue

srun shifter python /home/tutorial/mpi4py_benchmarks/helloworld.py

This prints the MPI rank to demonstrate that the MPI wire-up was successful, and I forced it to run the tasks on separate nodes and print the hostnames to show it’s communicating over a network, not via shared memory on the same node. But the output bemused me a little:

[samuel@bruce Python]$ cat slurm-1734135.out
Ubuntu 14.04.4 LTS \n \l

libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce001

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
Hello, World! I am process 0 of 3 on bruce001.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce002

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 1 of 3 on bruce002.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce003

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 2 of 3 on bruce003.

It successfully demonstrates that it is using an Ubuntu container on 3 nodes, but the warnings are triggered because Open-MPI in Ubuntu is built with Infiniband support and it is detecting the presence of the IB cards on the host nodes. This is because Shifter is (as designed) exposing the system’s /sys directory to the container. The problem is that this container doesn’t include the Mellanox user-space library needed to make use of the IB cards, and so you get warnings that they aren’t working and that it will fall back to a different mechanism (in this case TCP/IP over gigabit Ethernet).

Open-MPI allows you to specify what transports to use, so adding one line to my batch script:

export OMPI_MCA_btl=tcp,self,sm

cleans up the output a lot:

Ubuntu 14.04.4 LTS \n \l

Hello, World! I am process 0 of 3 on bruce001.
Hello, World! I am process 2 of 3 on bruce003.
Hello, World! I am process 1 of 3 on bruce002.

This then raises the question – what does this do for latency? The image contains a Python version of the OSU latency testing program, which uses different message sizes between 2 MPI ranks to provide a histogram of performance. Running this over TCP/IP is trivial with the dispel4py/docker.openmpi container, but of course it’s lacking the Mellanox library I need, and as the whole point of Shifter is security I can’t get root access inside the container to install the package. Fortunately the author of dispel4py/docker.openmpi has their implementation published on Github, so I forked their repo, signed up for Docker Hub and pushed a version which simply adds the libmlx4-1 package I needed.
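The change itself was tiny – something along these lines (a sketch from memory rather than the exact Dockerfile I used, with the image names matching the batch scripts below):

# Hypothetical rebuild: layer the Mellanox userspace library on top of the
# original image, then push the result to Docker Hub.
cat > Dockerfile <<'EOF'
FROM dispel4py/docker.openmpi
RUN apt-get update && apt-get install -y libmlx4-1 && rm -rf /var/lib/apt/lists/*
EOF

docker build -t chrissamuel/docker.openmpi:latest .
docker push chrissamuel/docker.openmpi:latest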

Running the test over TCP/IP is simply a matter of submitting this batch script which forces it onto 2 separate nodes:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=tcp,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

giving these latency results:

[samuel@bruce MPI]$ cat slurm-1734137.out
# MPI Latency Test
# Size [B]        Latency [us]
0                        16.19
1                        16.47
2                        16.48
4                        16.55
8                        16.61
16                       16.65
32                       16.80
64                       17.19
128                      17.90
256                      19.28
512                      22.04
1024                     27.36
2048                     64.47
4096                    117.28
8192                    120.06
16384                   145.21
32768                   215.76
65536                   465.22
131072                  926.08
262144                 1509.51
524288                 2563.54
1048576                5081.11
2097152                9604.10
4194304               18651.98

To run that same test over Infiniband I just modified the export in the batch script to force it to use IB (and thus fail if it couldn’t talk between the two nodes):

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=openib,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

which then gave these latency numbers:

[samuel@bruce MPI]$ cat slurm-1734138.out
# MPI Latency Test
# Size [B]        Latency [us]
0                         2.52
1                         2.71
2                         2.72
4                         2.72
8                         2.74
16                        2.76
32                        2.73
64                        2.90
128                       4.03
256                       4.23
512                       4.53
1024                      5.11
2048                      6.30
4096                      7.29
8192                      9.43
16384                    19.73
32768                    29.15
65536                    49.08
131072                   75.19
262144                  123.94
524288                  218.21
1048576                 565.15
2097152                 811.88
4194304                1619.22

So you can see that’s basically an order of magnitude improvement in latency using Infiniband compared to TCP/IP over gigabit Ethernet (which is what you’d expect).

Because there’s no virtualisation going on here there is no extra penalty to pay, no need to configure any fancy device pass-through, no loss of any CPU MSR access, and so I’d argue that Shifter makes Docker containers way more useful for HPC than virtualisation, or even Docker itself, for the majority of use cases.

Am I excited about Shifter – yup! The potential to allow users to build an application stack themselves, right down to the OS libraries, and (with a little careful thought) have something that could get native interconnect performance is fantastic. Throw in the complexities of dealing with conflicting dependencies between Python modules, system libraries, bioinformatics tools, etc, etc, and the need to provide simple methods for handling these, and the advantages seem clear.

So the plan is to roll this out into production at VLSCI in the near future. Fingers crossed! 🙂

Playing with Shifter – NERSC’s tool to use Docker containers in HPC

Early days yet, but playing with NERSC’s Shifter to let us use Docker containers safely on our test RHEL6 cluster is looking really interesting (given you can’t use Docker itself under RHEL6, and if you could the security concerns would cancel it out anyway).

To use a pre-built Ubuntu Xenial image, for instance, you tell it to pull the image:

[samuel@bruce ~]$ shifterimg pull ubuntu:16.04

There are a number of steps it goes through, first retrieving the container from Docker Hub:

2016-08-01T18:19:57 Pulling Image: docker:ubuntu:16.04, status: PULLING

Then disarming the Docker container by removing any setuid/setgid bits, etc, and repacking as a Shifter image:

2016-08-01T18:20:41 Pulling Image: docker:ubuntu:16.04, status: CONVERSION

…and then it’s ready to go:

2016-08-01T18:21:04 Pulling Image: docker:ubuntu:16.04, status: READY

Using the image from the command line is pretty easy:

[samuel@bruce ~]$ cat /etc/lsb-release
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch

[samuel@bruce ~]$ shifter --image=ubuntu:16.04 cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"

The shifter runtime will also copy in site-specified /etc/passwd, /etc/group and /etc/nsswitch.conf files so that you can do user/group lookups easily, as well as map in site-specified filesystems, so your home directory is just where it would normally be on the cluster.
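For example, a quick (hypothetical) check that your normal cluster identity resolves inside the container rather than whatever the image ships with:

[samuel@bruce ~]$ shifter --image=ubuntu:16.04 id

That should report your usual UID, GID and groups from the cluster, not anything baked into the Ubuntu image.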

[samuel@bruce ~]$ shifter --image=debian:wheezy bash --login
samuel@bruce:~$ pwd
/vlsci/VLSCI/samuel

I’ve not yet got to the point of configuring the Slurm plugin so you can queue up a Slurm job that will execute inside a Docker container, but very promising so far!

Correction: a misconception on my part – Shifter doesn’t put a Slurm batch job inside the container. It could, but there are good reasons why it’s better to leave that to the user (soon to be documented on the Shifter wiki page for Slurm integration).

Mount Burnett Observatory Open Day – 23rd January 2016 – noon until late!

If you’re around Melbourne, interested in astronomy and fancy visiting a community powered astronomical observatory that has a very active outreach and amateur astronomy focus then can I interest you in the Mount Burnett Observatory open day this Saturday (January 23rd) from noon onwards?

MBO Open Day flyer image

We’re going to have all sorts of things going on – talks, telescopes, radio astronomy, tours of the observatory dome (originally built by Monash University) and lots of enthusiastic volunteers!

We’re fundraising to build a new accessible modern dome to complement the existing facilities so please come and help us out.

Let’s Encrypt – getting your own (free) SSL certificates

For those who’ve not been paying attention, the Let’s Encrypt project entered public beta recently, so that anyone could get their own SSL certificates. So I jumped right in with the simp_le client (as the standard client tries to configure Apache for you, and I didn’t want that as my config is pretty custom) and used this tutorial as inspiration.

My server is running Debian Squeeze LTS (for long painful reasons that I won’t go into here now), but the client installation was painless; I just patched out a warning about Python 2.6 no longer being supported in venv/lib/python2.6/site-packages/cryptography/__init__.py. 🙂
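For the record, the sort of invocation I mean is roughly this (a from-memory sketch rather than my actual setup – check the simp_le README for the current flags, and the domain and webroot here are just placeholders):

# Hypothetical example: request/renew a certificate for example.org, proving
# control of the domain via files written into that site's webroot.
simp_le -d example.org:/var/www/example.org \
        -f account_key.json -f key.pem -f cert.pem -f fullchain.pem

Run from the directory you want the key and certificate files to land in, that (or something very like it) is all a renewal takes.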

It worked well until I got rate limited for creating more than 10 certificates in a day (yeah, I host a number of domains).

Very happy with the outcome, A+ would buy again.. 🙂

Git: Renaming/swapping “master” with a branch on Github

I was playing around with some code and, after getting it working, I thought I’d make just one more little quick easy change to finish it off – and found myself descending into a spiral of additional complexity due to the environment in which it had to work. As this was going to be “easy” I’d been pushing the commits straight to master on Github (I’m the only one using this code), and of course a few reworks in I realised that this was never going to work out well and needed to be abandoned.

So, how to fix this? The ideal situation would be to just disappear all the commits after the last good one, but that’s not really an option, so what I wanted was to create a branch from the last good point and then swap master and that branch over. Googling pointed me to some possibilities, including this “deprecated feedback” item from “githubtraining” which was a useful guide so I thought I should blog what worked for me in case it helps others.

  1. git checkout -b good $LAST_GOOD_COMMIT # This creates a new branch from the last good commit
  2. git branch -m master development # This renames the "master" branch to "development"
  3. git branch -m good master # This renames the "good" branch to "master".
  4. git push origin development # This pushes the "development" branch to Github
  5. In the Github web interface I went to my repos “Settings” on the right hand side (just above the “clone URL” part) and changed the default branch to “development“.
  6. git push origin :master # This deletes the "master" branch on Github
  7. git push --all # This pushes our new master branch (and everything else) to Github
  8. In the Github web interface I went and changed my default branch back to “master“.

…and that was it, not too bad!

You probably don’t want to do this if anyone else is using this repo though. 😉

Thoughts on the white spots of Ceres

If you’ve been paying attention to the world of planetary exploration you’ll have noticed the excitement about the unexpected white spots on the dwarf planet Ceres. Here’s an image from May 29th that shows them well.

Ceres with white spots

Having looked at a few images, my theory is that impacts are exposing some much higher albedo material, which you can see here at the top of the rebound peak at the centre of the crater, and that the impact has thrown some of this material up, which has then fallen back as Ceres rotated slowly beneath it, giving rise to the blobs to the side of the crater.

If my theory is right then, knowing Ceres’ gravity, its rotational speed and the distance between the rebound peak and the other spots, you should be able to work out how far up the material was thrown. That might tell you something about the size of the impact (depending on how much you know about the structure of Ceres itself).
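As a very rough sketch of how that calculation might go (a back-of-envelope estimate that assumes a purely vertical launch over a flat surface and, as in the naive picture above, that the ejecta doesn’t inherit the surface’s tangential velocity):

% Back-of-envelope sketch; d is the offset of a spot from the rebound peak,
% \Omega the rotation rate, R the radius, \varphi the latitude, g surface gravity.
\[
  v_{\mathrm{surf}} \approx \Omega R \cos\varphi, \qquad
  t \approx \frac{d}{v_{\mathrm{surf}}}, \qquad
  h \approx \frac{g\,t^{2}}{8}
\]

For Ceres the numbers going in would be a rotation period of roughly 9.1 hours, a mean radius of around 470 km and a surface gravity of about 0.28 m/s².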

As an analogy, here’s an impact on Mars captured by the HiRISE camera on MRO, showing an area of ice exposed by the impact.