2024.03.28 Meeting Notes

Agenda

Individual updates

JM

  • Node-/edge-/face-centered HDF5 IO now almost done (for real!)
  • PR ready for testing. Edge- and face-centered fields work in parthenon-mhd.
  • Still something off with XDMF
  • Final work on phdf (including cleanup)
  • Upcoming: new example app for vector advection (to also allow regression testing of non-cell-centered fields); see the declaration sketch below
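
For reference, declaring a non-cell-centered field in a package could look roughly like the following. This is a minimal sketch assuming the Metadata topology flags (Metadata::Face etc.) and StateDescriptor::AddField; the package and field names are made up, so check the PR for the actual interface.

```cpp
// Minimal sketch only: declaring a face-centered field in a package.
// Assumes the Metadata topology flags and StateDescriptor::AddField;
// "vector_advection" and "B_face" are illustrative names.
#include <memory>

#include <parthenon/package.hpp>

using namespace parthenon;

std::shared_ptr<StateDescriptor> Initialize(ParameterInput *pin) {
  auto pkg = std::make_shared<StateDescriptor>("vector_advection");
  // Face-centered, independently evolved, with ghost exchange.
  Metadata m({Metadata::Face, Metadata::Independent, Metadata::FillGhost});
  pkg->AddField("B_face", m);
  return pkg;
}
```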

LR

  • Fixed a bug in the ownership model for AMR with non-cell-centered fields. Already merged.
  • Finished the forest-of-trees PR. Ready for final review.
  • Additional cleanup PR in the pipeline, including MG fixes
  • Merge the forest-of-trees PR first; the smaller one to follow
  • Up next: more complex base grids

BR

  • working on the KHARMA release (still based on a Frankenbranch of Parthenon with some backported features)
  • working on yt-based viz
    • need support for transformed coordinates
    • support for vectors
    • will open a PR once the main parthenon frontend is merged. PG will follow up.
  • looking into TOML parsers for input files (see the sketch below)
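
Purely illustrative, since no parser has been picked: parsing a TOML-style input block with toml++ (one candidate library) might look like the following. The [parthenon.mesh] layout here is an assumption, not a settled input format.

```cpp
// Illustrative only: toml++ (https://github.com/marzer/tomlplusplus) is one
// candidate parser; the block layout below is an assumption.
#include <iostream>
#include <string_view>

#include <toml++/toml.hpp>

int main() {
  static constexpr std::string_view input = R"(
    [parthenon.mesh]
    nx1 = 64
    nx2 = 64
  )";
  toml::table tbl = toml::parse(input);
  std::cout << "nx1 = " << tbl["parthenon"]["mesh"]["nx1"].value_or(0) << "\n";
}
```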

MG

  • working downstream on Phoebus

PB

  • wrote Spack recipe -- will upstream

PM

  • PR for a new version of SwarmPacks (packing particles over blocks)
  • New complexity: packing of different types
    • mixed packs currently not supported
  • Request: please look at the interface and comment (toy sketch of the general shape below)
    • currently passing in a vector of strings and type-based indices
  • Under the hood, some cleanup/code dedup could happen (as some code is derived from SparsePacks)
  • Otherwise the code is ready for testing
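
To make "vector of strings and type-based indices" concrete, here is a purely illustrative toy, not the PR's actual API: the pack is constructed from variable names, but access goes through tag types instead of raw strings.

```cpp
// Toy sketch (not the PR's API): build a pack from a vector of variable
// names, then index it with type-based tags in hot loops.
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Each particle variable gets a tag type carrying its name.
struct XPos { static std::string name() { return "x"; } };
struct Mass { static std::string name() { return "mass"; } };

class ToySwarmPack {
 public:
  // Pack the requested variables (here: one value per variable per particle).
  ToySwarmPack(const std::vector<std::string> &vars, int n_particles) {
    for (const auto &v : vars) data_[v].assign(n_particles, 0.0);
  }
  // Type-based indexing: the tag resolves to a variable name.
  template <typename Tag>
  double &operator()(Tag, int p) { return data_.at(Tag::name())[p]; }

 private:
  std::map<std::string, std::vector<double>> data_;
};

int main() {
  ToySwarmPack pack({XPos::name(), Mass::name()}, /*n_particles=*/4);
  pack(XPos{}, 0) = 1.5;
  pack(Mass{}, 0) = 2.0;
  std::cout << pack(XPos{}, 0) * pack(Mass{}, 0) << "\n";  // prints 3
}
```

One appeal of the tag types is that a misspelled variable becomes a compile-time error at the call site rather than a runtime lookup failure.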

PG

  • Refactored RestartReader (now an abstract base class with no HDF5 dependency); see the sketch below
  • Looking into a CUDA IPC bug
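
The shape of that refactoring, as a minimal sketch (class and method names assumed from the notes, not necessarily Parthenon's exact declarations):

```cpp
// Minimal sketch of the pattern: an abstract reader interface with no HDF5
// types in its signatures, and an HDF5-backed implementation behind it.
// Names are assumed; the HDF5 calls are stubbed to keep this self-contained.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Abstract base class: callers never see HDF5 types.
class RestartReader {
 public:
  virtual ~RestartReader() = default;
  virtual std::vector<double> ReadDataset(const std::string &name) = 0;
};

// HDF5-backed implementation; all HDF5 calls stay in this class's
// translation unit.
class RestartReaderHDF5 : public RestartReader {
 public:
  explicit RestartReaderHDF5(std::string filename) : filename_(std::move(filename)) {}
  std::vector<double> ReadDataset(const std::string &name) override {
    // ... real version: H5Fopen/H5Dread etc. on filename_ ...
    return std::vector<double>(8, 0.0);
  }

 private:
  std::string filename_;
};

int main() {
  std::unique_ptr<RestartReader> reader =
      std::make_unique<RestartReaderHDF5>("restart.rhdf");
  std::cout << reader->ReadDataset("density").size() << "\n";  // prints 8
}
```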

Future of pmb->par_for

  • https://github.com/parthenon-hpc-lab/parthenon/issues/1034
  • Generic functionality is preferable (as it causes less confusion)
  • For now, add a shortened global interface
  • Eventually deprecate pmb->par_for
  • Potentially introduce an md (MeshData) version in the future, again as a generic function that takes an md object; see the sketch below
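
A toy sketch of the direction (names and signatures assumed; see issue #1034 for the actual discussion): a generic free function instead of a MeshBlock member, so a MeshData overload can be added later without touching MeshBlock at all.

```cpp
// Toy sketch only; the real types and loop machinery live in Parthenon.
#include <iostream>

struct MeshBlock { int is = 0, ie = 3; };  // stand-in for the real class

// Generic free function taking the object it operates on, rather than a
// member of MeshBlock; an overload taking a MeshData object could be added
// alongside later without changing existing call sites.
template <typename F>
void par_for(MeshBlock &mb, F &&body) {
  for (int i = mb.is; i <= mb.ie; ++i) body(i);
}

int main() {
  MeshBlock mb;
  par_for(mb, [](int i) { std::cout << i << " "; });  // prints 0 1 2 3
  std::cout << "\n";
}
```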

On the use of "one-sided MPI"

  • we've been called out twice now for saying "one-sided MPI", which is not strictly accurate: in the MPI standard, "one-sided" refers specifically to RMA operations (MPI_Put/MPI_Get/MPI_Accumulate)
  • JM will update the README anyway and fix the wording along the way

Review non-WIP PRs