
object: fix rgw ceph config (backport #9249) #9260

Closed
wants to merge 452 commits

Conversation

@mergify mergify bot commented Nov 25, 2021

This is an automatic backport of pull request #9249 done by Mergify.
Cherry-pick of c92270c has failed:

On branch mergify/bp/release-1.7/pr-9249
Your branch is up to date with 'origin/release-1.7'.

You are currently cherry-picking commit c92270cd6.
  (all conflicts fixed: run "git cherry-pick --continue")
  (use "git cherry-pick --skip" to skip this patch)
  (use "git cherry-pick --abort" to cancel the cherry-pick operation)

nothing to commit, working tree clean

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/github/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally


Mergify commands and options

More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport <destination> will backport this PR on <destination> branch

Additionally, on the Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com

subhamkrai and others added 30 commits September 13, 2021 16:43
It has happened more than once that the manifests do not
reflect the CSI images, so the images are now read from `pkg/operator/ceph/csi/spec.go`,
as that is what the code uses.

Closes: #8687
Signed-off-by: subhamkrai <srai@redhat.com>
(cherry picked from commit 284dbc0)
With the nfs and cassandra providers moving to their own repo
we can simplify the docs a bit. Some obsolete documentation
is also removed.

Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
(cherry picked from commit 6b7062e)
bot: Add more ceph commit prefixes (backport #8694)
docs: Refactor documentation for ceph provider (backport #8693)
build: use pkg/operator/ceph/csi/spec.go for csi images (backport #8699)
ceph: fix lvm osd db device check (backport #8267)
update validate_yaml method to dry-run yaml files only

Closes: #8707
Signed-off-by: subhamkrai <srai@redhat.com>
ci: update validate_yaml method
Add a section to the upgrade doc instructing users to (and how to)
migrate `CephRBDMirror` `peers` spec to individual `CephBlockPools`.
Adjust the pending release notes to refer to the upgrade section now,
and clean up a few references in related docs to make sure users don't
miss important documentation.

Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
(cherry picked from commit 91ec1ac)
Remove legacy documentation for configuring RBD mirroring. While we
still support legacy mirroring configs, we want to encourage new users
to use the CephBlockPool configuration for mirroring.

Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
(cherry picked from commit 7c51132)
docs: ceph: add peer spec migration to upgrade doc (backport #8435)
Rook checks for down OSDs by checking the `ReadyReplicas` count
in the OSD deployment. When an OSD pod goes into CLBO (CrashLoopBackOff)
due to disk failure, there is a delay before this `ReadyReplicas` count
becomes 0. The delay is very small but may result in Rook missing the
OSD down event. As a result, no blocking PDBs will be created and
only the default PDB with an `AllowedDisruptions` count of 0 is available.
This PR tries to solve this: the OSD PDB reconciler will be
reconciled again if the `AllowedDisruptions` count in the main
PDB is 0.

Signed-off-by: Santosh Pillai <sapillai@redhat.com>
(cherry picked from commit 7480f6b)
ceph: reconcile osd pdb if allowed disruption is 0 (backport #8698)
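The requeue-on-zero logic described above can be sketched as follows; `shouldRequeue` is a hypothetical helper for illustration, not Rook's actual reconciler code:

```go
package main

import "fmt"

// shouldRequeue reports whether the OSD PDB reconciler should run again.
// When the default PDB's AllowedDisruptions drops to 0, an OSD may be down
// even though ReadyReplicas has not yet reflected it, so the reconcile is
// requeued to re-check and create blocking PDBs if needed.
func shouldRequeue(allowedDisruptions int32) bool {
	return allowedDisruptions == 0
}

func main() {
	fmt.Println(shouldRequeue(0)) // an OSD may be down: requeue
	fmt.Println(shouldRequeue(1)) // disruptions still allowed: no extra reconcile
}
```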
The steps to configure the peers are detailed in the CephFilesystem
section. Only the CephFilesystem is holding the peer configuration, not
the CephFilesystemMirror which only controls the bootstrap of the
daemon.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit fd28497)
Let's force v16.2.5 since the CI is broken with 16.2.6. This gives us
time to continue to merge work and work on fixing deployments with
16.2.6 in parallel.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit ae291af)
ci: force a particular ceph version (backport #8746)
doc: fix cephfs-mirror documentation (backport #8732)
If the CephObjectStore health checker fails to be created, return a
reconcile failure so that the reconcile will be run again and Rook will
retry creating the health checker. This also means that Rook will not
list the CephObjectStore as ready if the health checker can't be
started.

Resolved backport conflicts in the below files:
  pkg/operator/ceph/object/controller.go
    - revert monitoring routine struct change from 1.7 to master
  pkg/operator/ceph/object/controller_test.go
    - use master branch's rearchitected test harness

Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
(cherry picked from commit 5383ba2)
When private/public networks have been defined in the Rook Ceph cluster, it is not
possible to properly configure the IP address of the liveness probe
for the manager.
Changes to the manager in Pacific introduced this regression.

This change replaces the HTTP probe with a command probe, thus avoiding the
need to determine the manager's IP address before launching
the pod.

fixes: #8510

Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
(cherry picked from commit 7fbd9f2)
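The switch from an HTTP probe to a command probe looks roughly like the following pod spec fragment. The command shown is illustrative only (the admin socket path and subcommand are assumptions), not the literal probe Rook generates:

```yaml
livenessProbe:
  exec:
    command:
      - env
      - -i
      - sh
      - -c
      - ceph --admin-daemon /run/ceph/ceph-mgr.a.asok status
  initialDelaySeconds: 10
```

Because the probe runs inside the mgr container, it no longer depends on resolving the manager's IP address on the public or private network.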
New version is out so let's use it.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 0c33493)
The new version v16.2.6 has a different behavior when it comes to the
number of cephfs-mirror socket files. The previous version had exactly 3,
and the new one has many more.
So let's just check for the presence of more sockets.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f040c37)
The MDS core team suggested we deploy the MDS daemon first and then do
the filesystem creation and configuration. Reversing the sequence lets
us avoid spurious FS_DOWN warnings when creating the filesystem.

Closes: #8745
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c1a88f3)
ceph: use next ceph v16.2.6 pacific version (backport #8743)
ceph: do not use http for mgr liveness probe (backport #8721)
ceph: retry object health check if creation fails (backport #8708)
The integration tests must always be run against the local
build of rook, and an image should never be pulled from dockerhub.
To prevent pulling a release or master tag, the local build
will use a tag specific to the build and not ever published
elsewhere.

Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
test: Run release branch tests against the correct local build version
There was a log line that erroneously informed that the object store status
would not be updated because the store was being deleted. Move the
line to the correct position.

Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
(cherry picked from commit c8b26e4)
rgw: fix misleading log line in rgw health checker (backport #8765)
Like RBD, the CephFS provisioner pod need not
run as privileged, since it does not perform operations
like the plugin pods do (mounting and unmounting).
Remove the privileged permissions accordingly.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 95775fd)
mergify bot and others added 25 commits November 17, 2021 15:27
osd: add privileged support (back) to blkdevmapper securityContext (work-around) (backport #9191)
Removing `ceph` component from doc as component
`ceph` was removed from commitlint bot earlier.

Signed-off-by: subhamkrai <srai@redhat.com>
(cherry picked from commit 9519408)
docs: remove old commitlint component `ceph` from doc (backport #9190)
The mon cluster's clusterInfo is initiated separately
and misses setting the cluster name, so the default name
from AdminClusterInfo is used instead.

Part-of: #9159
Signed-off-by: parth-gr <paarora@redhat.com>
(cherry picked from commit aea2856)
mon: set cluster name to mon cluster (backport #9203)
Ceph has recently reported that it may be unsafe to upgrade clusters
from Nautilus/Octopus to Pacific v16.2.0 through v16.2.6. We are
tracking this in Rook issue #9185.
Add a warning to the upgrade doc about this.

Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
(cherry picked from commit 1f87196)
docs: add OMAP quick fix warning to upgrade guide (backport #9187)
With the patch release, the examples are updated to
version v1.7.8.

Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
build: Set the release version to v1.7.8
By default, HPA can use memory or CPU consumption for
autoscaling, but it can also use custom metrics. Several providers
support HPA via custom metrics; one of them is the KEDA project. Here
it is done with the help of the Prometheus scaler.

Signed-off-by: Jiffin Tony Thottan <thottanjiffin@gmail.com>
(cherry picked from commit d32c837)
The templates for the mgr-generated ServiceMonitor and PrometheusRule objects included the labels
`prometheus` and `team`, making it impossible for a user to override them.

This adds a new method `OverwriteApplyToObjectMeta` to pkg/apis/rook.io.Labels,
which, contrary to the existing `ApplyToObjectMeta` method, overwrites existing
labels.

Backport of PR #9180
Signed-off-by: Mara Sophie Grosch <littlefox@lf-net.org>
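The difference between the two label-application methods can be sketched as below. The `Labels` type and method names mimic `pkg/apis/rook.io`, but this is a simplified stand-in, not the actual implementation:

```go
package main

import "fmt"

// Labels mimics the rook.io Labels type: a map applied to object metadata.
type Labels map[string]string

// applyToObjectMeta only sets labels that are not already present,
// leaving existing values (e.g. template defaults) untouched.
func (l Labels) applyToObjectMeta(existing map[string]string) {
	for k, v := range l {
		if _, ok := existing[k]; !ok {
			existing[k] = v
		}
	}
}

// overwriteApplyToObjectMeta sets every label, replacing existing values,
// so user-specified labels win over template defaults.
func (l Labels) overwriteApplyToObjectMeta(existing map[string]string) {
	for k, v := range l {
		existing[k] = v
	}
}

func main() {
	meta := map[string]string{"prometheus": "rook-prometheus"}
	Labels{"prometheus": "mine"}.applyToObjectMeta(meta)
	fmt.Println(meta["prometheus"]) // template default survives
	Labels{"prometheus": "mine"}.overwriteApplyToObjectMeta(meta)
	fmt.Println(meta["prometheus"]) // user label now wins
}
```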
Add zonegroup and zone args to avoid a failed reconcile.

Signed-off-by: Olivier Bouffet <olivier.bouffet@infomaniak.com>
docs: add details about HPA via KEDA (backport #9202)
For CRDs not using the new NFS spec that includes the pool settings,
applying the "size" property won't work since it is set to 0. The pool
still gets created but returns an error. The loop is re-queued, but on
the second run the pool is detected, so no further configuration is done.

Closes: #9205
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c3accdc)
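The size handling above amounts to skipping the property when a legacy CRD leaves it at 0. A minimal sketch follows; `poolSpecArgs` and the pool name are hypothetical, for illustration only:

```go
package main

import "fmt"

// poolSpecArgs builds pool-creation arguments. CRDs created before the
// pool settings were added to the NFS spec leave the size at 0; passing
// "--size 0" fails, so the property is only set when actually present.
func poolSpecArgs(size uint) []string {
	args := []string{"osd", "pool", "create", "nfs-ganesha"}
	// skip the replication size when the legacy CRD did not set it
	if size > 0 {
		args = append(args, "--size", fmt.Sprint(size))
	}
	return args
}

func main() {
	fmt.Println(poolSpecArgs(0))
	fmt.Println(poolSpecArgs(3))
}
```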
Previously, if the pool was present we would not run the pool creation
again. This is a problem: if the pool spec changes, the new settings will
never be applied.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f314284)
Updating the Prometheus resources (PrometheusRule and ServiceMonitor) is
done by fetching the current resource from the server and updating the
spec on it. This commit makes it also apply the labels, so users can
update them via Rook CRDs.

Closes: #9241
Signed-off-by: Mara Sophie Grosch <littlefox@lf-net.org>
(cherry picked from commit 733878b)
object: fix search user in objectstore
monitoring: allow overriding monitoring labels
nfs: only set the pool size when it exists and always run default pool creation (backport #9224)
The MKNOD capability was missing, and due to a recent addition some pods now
require this cap as well as privileged mode.
The cap must be explicitly exposed so it can be requested by a pod.

Closes: #9234
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit b38f430)
core: fix openshift security context (backport #9236)
monitoring: update label on prometheus resources (backport #9243)
use Zone and ZoneGroup instead of storename for rgw_zone and rgw_zonegroup

Signed-off-by: Olivier Bouffet <olivier.bouffet@infomaniak.com>
use Zone and ZoneGroup instead of storename for rgw_zone and rgw_zonegroup

Signed-off-by: Olivier Bouffet <olivier.bouffet@infomaniak.com>
(cherry picked from commit c92270c)
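The zone/zonegroup selection can be sketched as follows; `zoneConfig` is a hypothetical helper showing the intent, not Rook's actual code:

```go
package main

import "fmt"

// zoneConfig returns the values to use for rgw_zone and rgw_zonegroup.
// When the object store participates in a multisite setup, the configured
// zone and zone group names must be used; falling back to the store name
// is only correct for a standalone store.
func zoneConfig(storeName, zone, zoneGroup string) (string, string) {
	if zone != "" && zoneGroup != "" {
		return zone, zoneGroup
	}
	return storeName, storeName
}

func main() {
	z, zg := zoneConfig("my-store", "zone-a", "group-a")
	fmt.Println(z, zg) // multisite: configured names win
	z, zg = zoneConfig("my-store", "", "")
	fmt.Println(z, zg) // standalone: store name is used
}
```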
@mergify mergify bot added the conflicts label Nov 25, 2021
@leseb leseb changed the base branch from release-1.7 to master November 25, 2021 17:36
@leseb leseb closed this Nov 25, 2021
@leseb leseb deleted the mergify/bp/release-1.7/pr-9249 branch November 25, 2021 17:36
mergify[bot] (Author) commented Nov 25, 2021

This pull request has merge conflicts that must be resolved before it can be merged. @mergify[bot] please rebase it. https://rook.io/docs/rook/latest/development-flow.html#updating-your-fork
