
Rebase 1.6.1 #13653
Merged: 153 commits, Apr 27, 2017
Conversation

@sttts commented Apr 6, 2017

Bump Kubernetes to 1.6.1 and to master from Apr 4.

Replaces #13483
Fixes #13419
Fixes #13722
Fixes #13571
xref #12458

1.6 post-rebase tasks: #13732

TODOs:

  • check Status apiVersion in the build webhook endpoint; compare #13653 (comment) and the PR "Build instead of Status in build webhook" (sttts/origin#75)
  • make sure storage.k8s.io objects remain encoded in etcd as v1beta1
  • HPA doesn't exist in extensions/v1beta1 any more. Make sure we don't break existing HPAs in etcd encoded as extensions/v1beta1
  • figure out whether this is healthy: W0413 17:32:29.303436 41108 authentication.go:362] AnonymousAuth is not allowed with the AllowAll authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
  • @deads2k's comments (#13653 (comment)):
    • REVIEW: adapt: remove OutputVersion helper calls in pkg/cmd/cli/cmd - I think it fails on multiple groups
    • REVIEW: update: pkg/cmd/server/kubernetes/master TestCMServerDefaults - get confirmation from @derekwaynecarr post-rebase
    • REVIEW: update: etcd_storage_path_test.go, with unfinished etcd3 support - get review from @enj possibly post-rebase /cc @enj
    • "Don't give /status access to the namespaced roles." (#13653 (comment))
  • Review "REVIEW: adapt: pkg/cmd/util/clientcmd" because the logic completely changed from rebase-1.6.1-base to rebase-1.6.1-2-base @ncdc
    10f939a#r112429397
  • Semantic equality in deploy: sttts@766b1de#commitcomment-21821853

TODOs for the next rebase to origin master:

Broken extended tests (run on local Fedora 25, as AWS CI is down):

https://docs.google.com/a/redhat.com/spreadsheets/d/1OXZtuzmU86jxkN2qe7YuOCFXNDPtg1oEVD-CHIZI6u0/edit?usp=sharing

@sttts mentioned this pull request Apr 6, 2017
@openshift-bot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) Apr 6, 2017
@liggitt commented Apr 6, 2017

@@ -31,4 +31,4 @@ REQUIRED_BINS=(
   "./..."
 )

-"${GODEP}" save -t "${REQUIRED_BINS[@]}"
+GOPATH=$GOPATH:$GOPATH/src/k8s.io/kubernetes/staging "${GODEP}" save -t "${REQUIRED_BINS[@]}"
Contributor:

Are other tools going to need this $GOPATH addition?

Author:

Only godep. We create symlinks in vendor/ to the staging dirs, so other tools like go build have no problems.

This symlink business might go away once we extend our tooling to handle upstream commits across multiple repos. But as that is not trivial, we are postponing it.
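For illustration, a minimal Go sketch of the symlink arrangement described above (paths are illustrative; the actual hack scripts are Bash): each staging repo vendored under kubernetes is linked into vendor/k8s.io/ so that a plain go build resolves it.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Illustrative layout: staging repos vendored under kubernetes, and the
	// vendor/ locations other tools expect them at.
	staging := filepath.Join("vendor", "k8s.io", "kubernetes", "staging", "src", "k8s.io")
	entries, err := os.ReadDir(staging)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		// Relative to vendor/k8s.io/, the staging copy lives under kubernetes/...
		target := filepath.Join("kubernetes", "staging", "src", "k8s.io", e.Name())
		link := filepath.Join("vendor", "k8s.io", e.Name())
		if err := os.Symlink(target, link); err != nil {
			fmt.Println("skipping:", err) // e.g. link already exists
		}
	}
}
```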


# godep fails to copy all packages in staging because it gets confused by the symlinks.
# Hence, we copy over manually until we have proper staging repo tooling.
rsync -avx --include='*.go' --include='*/' --exclude='*' $GOPATH/src/k8s.io/kubernetes/staging/src/* vendor/k8s.io/kubernetes/staging/src/
Contributor:

Does this need a --delete in case changes later remove a file here?

Author:

We remove vendor/ before godep-save. So this is fine.

echo "s/$NEWREV/$OLDREV/" >> "$TMPGOPATH/undo.sed"
}

undo::forks::in::godep::json () {
Contributor:

We use :: to denote namespace-y things ... just use _ or - here.

Contributor:

Namespaces in bash... :')

Contributor:

lol

Member:

Was about to write the same comment a few lines above. I totally agree with Steve here. I'd suggest the prefix to be godep::save, which would give you godep::save::undo_forks_in_godep_json here.

Member:

Let's be consistent with that and not make our Bash guru sad 😉

Author:

Done, using -.

@sttts commented Apr 11, 2017

[test]

@soltysh commented Apr 11, 2017

[test]

hack/build-go.sh Outdated
@@ -7,6 +7,10 @@ source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
 build_targets=("$@")
 platform="$(os::build::host_platform)"

+# Set build tags for these binaries
+readonly OS_GOFLAGS_TAGS="include_gcs include_oss"
+readonly OS_GOFLAGS_TAGS_$(os::build::platform_arch)="gssapi"
Contributor:

@smarterclayton why did these get put into build-cross only in current master?

Author:

The second line adds the build requirement that gssapi devel packages (used with CGo) are installed. Not sure we want that in general, maybe only for releases?

@@ -102,6 +100,7 @@ func TestKubeletDefaults(t *testing.T) {
 RegistryPullQPS: 5.0,
 ResolverConfig: kubetypes.ResolvConfDefault,
 KubeletCgroups: "",
+CgroupsPerQOS: true,
Author:

@derekwaynecarr @sjenning can you please review the new node config in the 1.6 rebase here?

We are seeing this on the CI AWS machines when running hack/end-to-end-docker.sh (in contrast, it's fine on a Fedora 25 machine):

I0411 12:32:33.862520    1647 kubelet.go:1757] skipping pod synchronization - [Failed to start ContainerManager failed to initialise top level QOS containers: root container /kubepods doesn't exist]

Are we missing any other settings?

Contributor:

@sttts What OS type/version and Docker version are the CI machines running?

Contributor:

# uname -a
Linux ip-172-18-13-151.ec2.internal 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 9 10:09:10 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
# rpm -q docker
docker-1.12.6-14.el7.x86_64
# systemctl cat docker
# /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=rhel-push-plugin.socket
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --authorization-plugin=rhel-push-plugin \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=10min
Restart=on-abnormal
MountFlags=slave

[Install]
WantedBy=multi-user.target
# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.6
Storage Driver: devicemapper
 Pool Name: docker-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 19.92 MB
 Data Space Total: 7.457 GB
 Data Space Available: 7.438 GB
 Metadata Space Used: 40.96 kB
 Metadata Space Total: 29.36 MB
 Metadata Space Available: 29.32 MB
 Thin Pool Minimum Free Space: 745.5 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge null host overlay
 Authorization: rhel-push-plugin
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Security Options: seccomp selinux
Kernel Version: 3.10.0-327.22.2.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 4
Total Memory: 15.26 GiB
Name: ip-172-18-13-151.ec2.internal
ID: 4GNM:G4KZ:FPMN:DQSK:ZWXO:WPQQ:TXXT:S432:5H3B:Z7PH:3WCO:NAUM
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
 ci.dev.openshift.redhat.com:5000
 172.30.0.0/16
 127.0.0.0/8
Registries: docker.io (secure)

@@ -129,7 +129,7 @@ os::cmd::expect_success 'oc env -n default dc/docker-registry REGISTRY_MIDDLEWAR
 os::cmd::expect_success 'oc rollout status dc/docker-registry'
 os::log::info "Restore configured to enable mirroring"

-registry_pod="$(oc get pod -n default -l deploymentconfig=docker-registry --show-all=false --template='{{(index .items 0).metadata.name}}')"
+registry_pod="$(oc get pod -n default -l deploymentconfig=docker-registry --template '{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{end}}{{end}}')"
Contributor:

If there's more than one pod here it's going to smash the names together without spaces -- also, what has changed to lead to this happening?

Contributor:

Don't know yet. It's just taking ~10 seconds on my laptop for the old pod to go from Terminating to 100% deleted.

Author:

The rollout does not synchronize with the deletion. I think excluding pods which are in deletion state is correct.

We shouldn't have more than one active pod if our instance=1 deployment works. If not, the test breaks. We could add a space and select the first pod, but that would hide a problem.

Contributor:

So the behavior change here is that the terminated --> deleted transition is much slower, and this causes the get here to fail... agreed that excluding terminated pods here is reasonable, as long as we're not plastering over a performance/behavior regression with this.
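As an aside, the new --template's behavior is easy to model with Go's text/template. A standalone sketch (pod data made up) that shows both the deletionTimestamp filtering and the name-smashing concern raised above:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical pod list: one Terminating pod (has deletionTimestamp),
	// one live pod. Only the live pod's name is printed.
	pods := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{
				"name":              "docker-registry-1-old",
				"deletionTimestamp": "2017-04-12T10:00:00Z",
			}},
			{"metadata": map[string]interface{}{
				"name": "docker-registry-2-new",
			}},
		},
	}
	tmpl := template.Must(template.New("pods").Parse(
		`{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{end}}{{end}}`))
	// Prints: docker-registry-2-new. Two live pods would print back to back
	// with no separator, which is the concern noted above.
	if err := tmpl.Execute(os.Stdout, pods); err != nil {
		panic(err)
	}
}
```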

for _, fqKind := range fqKinds {
	if unversioned && fqKind.Group == "" {
Contributor:

Why are you particular on the value of the group? Why isn't meta.k8s.io valid?

Author:

metav1.Status is only registered as unversioned into the core v1 group. Our build apigroup webhook storage returns Status in its New() method (func (h *WebHook) New() runtime.Object). This code here checks at API installation time that the registered group matches the storage being installed.

Hence, we have two choices: (1) register Status under the build apigroup (sttts@adc5280), or (2) make the API installer happy with unversioned kinds as well.

We chose (2) here to avoid side effects from Status being registered under another apigroup.
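A standalone model of option (2), using stand-in types rather than the real apimachinery ones, showing why the group check has to special-case unversioned kinds:

```go
package main

import "fmt"

// GroupKind is a stand-in for the apimachinery type of the same shape.
type GroupKind struct{ Group, Kind string }

// resourceKind models the installer check discussed above: unversioned kinds
// (like metav1.Status, registered only into the core "" group) are accepted
// no matter which apigroup is being installed.
func resourceKind(installGroup string, fqKinds []GroupKind, unversioned bool) (GroupKind, error) {
	for _, fqKind := range fqKinds {
		if unversioned && fqKind.Group == "" {
			return fqKind, nil
		}
		if fqKind.Group == installGroup {
			return fqKind, nil
		}
	}
	return GroupKind{}, fmt.Errorf("no kind registered for group %q", installGroup)
}

func main() {
	// The webhook storage's New() object maps to Status, which exists only in
	// the core group, while the storage is installed under a build apigroup.
	kinds := []GroupKind{{Group: "", Kind: "Status"}}
	fmt.Println(resourceKind("build.openshift.io", kinds, true))
}
```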

Contributor:

Hence, we have two choices: (1) register Status under the build apigroup (sttts/origin@adc5280), or (2) make the API installer happy with unversioned kinds as well.

Option 1 seems more correct. I haven't fully considered the repercussions, but perhaps a followup issue is in order.

Does the serialized return include an apiVersion? Do we know what it is? meta.k8s.io or build.openshift.io seem good, but "" seems less good.

Author:

We haven't checked the returned version, but we should. Taking a look.

Author:

added a todo in the description above.

Author:

I'm starting to think that the Status type is actually wrong; compare func (w *WebHook) ServeHTTP(writer http.ResponseWriter, req *http.Request, ctx apirequest.Context, name, subpath string) error. We return Build if I read that code correctly. This also matches the integration tests: err = json.Unmarshal(body, returnedBuild).

@smarterclayton you wrote this ^^
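A quick way to check what the webhook actually serializes (standalone sketch; the JSON body here is made up, the field names are the standard TypeMeta ones, and a real check would read the webhook's HTTP response):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeMeta mirrors the standard kind/apiVersion envelope of API objects.
type typeMeta struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
}

func main() {
	// Made-up response body for illustration.
	body := []byte(`{"kind":"Build","apiVersion":"v1","metadata":{"name":"example"}}`)
	var tm typeMeta
	if err := json.Unmarshal(body, &tm); err != nil {
		panic(err)
	}
	fmt.Printf("returned kind=%s apiVersion=%s\n", tm.Kind, tm.APIVersion)
}
```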

Author:

Here is a PR: sttts#75

Author:

Merged sttts#75. It reverts this patch in APIInstaller.getResourceKind and replaces the Status type with Build in the webhook storage.

@@ -141,7 +141,7 @@ func resolveContainerSecurityContext(provider kscc.SecurityContextConstraintsPro
 // since that is how the sc provider will eventually apply settings in the runtime.
 // This results in an SC that is based on the Pod's PSC with the set fields from the container
 // overriding pod level settings.
-container.SecurityContext = sc.DetermineEffectiveSecurityContext(pod, container)
+container.SecurityContext = sc.InternalDetermineEffectiveSecurityContext(pod, container)
Contributor:

I think this is worth a followup issue. Upstream is actively trying to remove this method, right?

Author:

added to #13732

Contributor:

Yeah I took the quick way out and remained on internal for now

Author:

We can get rid of InternalDetermineEffectiveSecurityContext using some conversion. I would leave it as it is until upstream really removes it. No need to rush here IMO.

var outputVersion schema.GroupVersion
outputVersionString := kcmdutil.GetFlagString(cmd, "output-version")
if len(outputVersionString) == 0 {
	outputVersion = *clientConfig.GroupVersion
Contributor:

Isn't this going to be incorrect every time you're getting resources from multiple groups, e.g. oc export builds,deployments? I really think we want to move away from choosing the --output-version like upstream and use oc convert instead. We can make that one spot correct with a preferred version chain.

I have no suggestion that I think could make this work properly for the scenario I described.

Contributor:

I'm not sure what you're suggesting we do here?

Contributor:

Deprecate output version; select the version you want via your input (oc export jobs.v1.batch,deployments.v1beta1.apps, etc.).

Can we make export output whatever version you requested?
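For illustration, a toy parser for that fully-qualified resource syntax (resource.version.group); this is not oc's actual parsing code, just a sketch of how the requested version could ride along with the input instead of a global --output-version flag:

```go
package main

import (
	"fmt"
	"strings"
)

// parse splits "resource.version.group" arguments like "jobs.v1.batch";
// shorter forms leave version and group empty.
func parse(arg string) (resource, version, group string) {
	parts := strings.SplitN(arg, ".", 3)
	switch len(parts) {
	case 1:
		return parts[0], "", ""
	case 2:
		return parts[0], parts[1], ""
	default:
		return parts[0], parts[1], parts[2]
	}
}

func main() {
	for _, a := range []string{"jobs.v1.batch", "deployments.v1beta1.apps", "builds"} {
		r, v, g := parse(a)
		fmt.Printf("%s -> resource=%s version=%s group=%s\n", a, r, v, g)
	}
}
```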

echo "s/$NEWREV/$OLDREV/" >> "$TMPGOPATH/undo.sed"
}

undo::forks::in::godep::json () {
Member:

Was about to write the same comment a few lines above. I totally agree with Steve here. I'd suggest the prefix to be godep::save, which would give you godep::save::undo_forks_in_godep_json here.

echo "s/$NEWREV/$OLDREV/" >> "$TMPGOPATH/undo.sed"
}

undo::forks::in::godep::json () {
Member:

Let's be consistent with that and not make our Bash guru sad 😉

@@ -3301,64 +3506,60 @@
 },
 {
 "ImportPath": "google.golang.org/grpc",
-"Comment": "v1.0.4",
-"Rev": "777daa17ff9b5daef1cfdf915088a2ada3332bf0"
+"Comment": "v1.0.2",
Member:

Why are we taking an older version of grpc?

Member:

Oh, I see there's another commit which vendors grpc inside coreos/etcd. Why would we bother with those multilevel Godeps instead of having one, which is verified? I think none of our tools that check Godeps looks into the nested ones, and with the proliferation of those (I know of at least one other doing this) we should be.

Author:

The generated code is not compatible. Upstream kube uses 1.0.2. etcd uses 1.0.4.

Contributor:

The options are vendor inside coreos/etcd or downgrade etcd to the same version kube is using.

Member:

Actually, looking at the current commitchecker implementation I think it does check all of Godeps.json.
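If the checking tools should look at nested manifests too, the discovery side is simple. A sketch (not the actual commitchecker) that finds nested Godeps.json files under vendor/:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Walk vendor/ and report every nested Godeps/Godeps.json, i.e. the
// multilevel Godeps discussed above, so a verification tool could
// inspect those manifests as well.
func main() {
	filepath.Walk("vendor", func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return nil // skip unreadable entries
		}
		if !info.IsDir() && strings.HasSuffix(path, filepath.Join("Godeps", "Godeps.json")) {
			fmt.Println("nested Godeps manifest:", path)
		}
		return nil
	})
}
```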

}

// We don't show the deployment history when running `oc rollback --dry-run`.
if d.config == nil && !isNotDeployed {
-	sorted := deploymentsHistory
+	var sorted []*v1.ReplicationController
+	// TODO(rebase-1.6): we should really convert the describer to use a versioned clientset
Member:

We should have another post-rebase issue wrt that.

Author:

Added to #13732

@@ -151,15 +151,15 @@ func TestValidatePodSecurityPolicyReview(t *testing.T) {
 },
 },
 },
-`spec.serviceAccountNames[0]: Invalid value: "my bad sa": must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (e.g. 'example.com')`: {
+`spec.serviceAccountNames[0]: Invalid value: "my bad sa": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')`: {
Member:

You could squash all the changes fixing this test string into a single commit, at least fix-test: pkg/image/api/validation/validation_test.go and fix-test: pkg/api. I'd also squash the fix-test: pkg/deploy/api/validation/validation_test.go error output with it.

Author:

Will do in the next rebase to origin master.

Author:

done
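The regex quoted in the new error message can be exercised directly; a standalone check (same pattern as in the test string above, anchored):

```go
package main

import (
	"fmt"
	"regexp"
)

// The DNS-1123 subdomain pattern from the validation message above.
var dns1123Subdomain = regexp.MustCompile(
	`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func main() {
	// "my bad sa" fails (spaces), "example.com" passes.
	for _, name := range []string{"my bad sa", "example.com"} {
		fmt.Printf("%q valid: %v\n", name, dns1123Subdomain.MatchString(name))
	}
}
```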

@@ -155,7 +159,7 @@ func GetBootstrapClusterRoles() []authorizationapi.ClusterRole {
 "replicationcontrollers/status", "resourcequotas", "resourcequotas/status", "securitycontextconstraints", "serviceaccounts", "services",
 "services/status").RuleOrDie(),

-authorizationapi.NewRule(read...).Groups(appsGroup).Resources("statefulsets", "statefulsets/status").RuleOrDie(),
+authorizationapi.NewRule(read...).Groups(appsGroup).Resources("statefulsets", "statefulsets/status", "deployments", "deployments/scale", "deployments/status").RuleOrDie(),
Member:

  1. You need similar changes in AdminRoleName, EditRoleName, ViewRoleName wrt adding deployments to apps.
  2. We should drop jobs and horizontalpodautoscalers from extensions, everywhere.

Author:

@ncdc ^^

Contributor:

You need similar changes in AdminRoleName, EditRoleName, ViewRoleName wrt adding deployments to apps.

Don't give /status access to the namespaced roles.

Contributor:

I'll fix jobs and hpas. I'm going to make cleaning up /status a post-rebase follow-up, as it's currently there for the namespaced roles.

@sttts mentioned this pull request Apr 12, 2017
@deads2k commented Apr 12, 2017

Commits that lgtm (I'll keep updating)

  1. REVIEW: UPSTREAM: 00000: k8s.io/apiserver: accept unversioned core gr - with followup - Fixed.
  2. REVIEW: use version that handles internal api objects to determine ef - with followup
  3. REVIEW: adapt: wrap controller manager AddFlags with our controller list
  4. REVIEW: adapt: pkg/cmd/cli/cmd/set printer changes
  5. REVIEW: update: known kube apigroup versions in pkg/cmd/server/api
  6. REVIEW: update: pkg/cmd/server/kubernetes/master TestAPIServerDefault
  7. REVIEW: refactoring: merge storage factory code paths
  8. REVIEW: adapt: convert proxy & dns to shared informers (we had to do this @now?)
  9. REVIEW: update: cluster-reader permissions
  10. REVIEW: fix: write versioned AdmissionConfiguration for admission plu - definitely leave this commit as its own thing.
  11. SQUASH: REVIEW: initial 1.6 port of master setup
  12. REVIEW: make sure we convert to v1 for List in PrintResourceInfos
  13. REVIEW: initial 1.6 port of master setup - The storage setup concerns me. Fixed.

Commits that I don't like, but can't think of a small fix for

  1. REVIEW: adapt: remove OutputVersion helper calls in pkg/cmd/cli/cmd - I think it fails on multiple groups

Commits with substantive comments

  1. REVIEW: update: pkg/cmd/server/kubernetes/master TestCMServerDefaults - get confirmation from @derekwaynecarr
  2. REVIEW: update: etcd_storage_path_test.go, with unfinished etcd3 support - get review from @enj

return groupVersionResources, nil
}
-namespaceController := namespacecontroller.NewNamespaceController(kubeClient, clientPool, gvrFn, c.ControllerManager.NamespaceSyncPeriod.Duration, kapi.FinalizerKubernetes)
+namespaceController := namespacecontroller.NewNamespaceController(kubeClient, clientPool, gvrFn, namespaceInformer, c.ControllerManager.NamespaceSyncPeriod.Duration, kapiv1.FinalizerKubernetes)
Contributor:

Why not c.Informers.KubernetesInformers().Core().V1().Namespaces()?

Author:

@ncdc ^^

Contributor:

Brain fart?

@@ -676,7 +681,7 @@ func newAuthenticator(config configapi.MasterConfig, restOptionsGetter restoptio
 authenticators = append(authenticators, certauth)
 }

-resultingAuthenticator := union.NewFailOnError(authenticators...)
+resultingAuthenticator := union.New(authenticators...)
Contributor:

This looks incorrect. We should still be NewFailOnError, right?

Author:

This is fixed in a follow-up from yesterday.

Author:

done

topLevelAuthenticators = append(topLevelAuthenticators, anonymous.NewAuthenticator())

-return group.NewAuthenticatedGroupAdder(union.NewFailOnError(topLevelAuthenticators...)), nil
+return group.NewAuthenticatedGroupAdder(union.New(topLevelAuthenticators...)), nil
Contributor:

NewFailOnError

Author:

Fixed as well: fb66c26

Author:

done
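For context on the difference, a standalone model of the two union flavors (stand-in types, not the k8s.io/apiserver implementation): New tries the next authenticator on error, while NewFailOnError aborts on the first error.

```go
package main

import (
	"errors"
	"fmt"
)

// authnFunc is a stand-in for an authentication attempt: it returns the
// user, whether authentication succeeded, and an error.
type authnFunc func() (string, bool, error)

// union models the two flavors discussed above: with failOnError the first
// error short-circuits; otherwise errors are collected and the chain continues.
func union(failOnError bool, auths ...authnFunc) authnFunc {
	return func() (string, bool, error) {
		var errs []error
		for _, a := range auths {
			user, ok, err := a()
			if err != nil {
				if failOnError {
					return "", false, err
				}
				errs = append(errs, err)
				continue
			}
			if ok {
				return user, true, nil
			}
		}
		return "", false, errors.Join(errs...)
	}
}

func main() {
	broken := func() (string, bool, error) { return "", false, errors.New("token backend down") }
	anon := func() (string, bool, error) { return "system:anonymous", true, nil }

	_, _, err := union(true, broken, anon)()
	fmt.Println("fail-on-error:", err) // token backend down

	user, ok, _ := union(false, broken, anon)()
	fmt.Println("lenient:", user, ok) // system:anonymous true
}
```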

etcdConfig.CertFile = masterConfig.EtcdClientInfo.ClientCert.CertFile
etcdConfig.CAFile = masterConfig.EtcdClientInfo.CA

storageFactory, err := kubeapiserver.NewStorageFactory(
Contributor:

As I recall it, this is the one that does the funny parsing of the options. I don't think that we want to respect the kube-arg for dealing with this. Seems like we should respect our config instead and use NewDefaultStorageFactory

Author:

Does this mean you want to drop everything server.APIEnablement.RuntimeConfig does?

Then we end up with this: sttts@f57955d

Author:

done

if err != nil {
return nil, fmt.Errorf("error determining service IP ranges: %v", err)
}
if err := s.SecureServing.MaybeDefaultWithSelfSignedCerts(s.GenericServerRunOptions.AdvertiseAddress.String(), apiServerServiceIP); err != nil {
Contributor:

We shouldn't have to do this. By the time we're called, certs should be available.

Author:

done

Informers: informers,
}

return kmaster, nil
}

func ClientCARegistrationHook(options *configapi.MasterConfig) (*master.ClientCARegistrationHook, error) {
Contributor:

We don't get this for free?

Author:

No, it's hardcoded in the master. As long as we do not decompose or just plainly use master.BuildMasterConfig, we have to reimplement it: sttts@c12bd72

Author:

done

-ReconcilerSyncLoopPeriod: metav1.Duration{Duration: 5 * time.Second},
+ReconcilerSyncLoopPeriod: metav1.Duration{Duration: 60 * time.Second},
Controllers: []string{"*"},
EnableTaintManager: true,
Contributor:

@derekwaynecarr taints on or off?

-// Location is the path to a configuration file that contains the plugin's
-// configuration
+// Location is the path to a legacy configuration file that contains the plugin's
+// configuration. DEPRECATED.
Contributor:

deprecated? Because kube now lets you split bits apart?

Author:

At least it's legacy in kube (// convert the legacy format to the new admission control format). Probably neither "legacy" nor "DEPRECATED" makes much sense here in the origin config. Will remove it.


@sttts commented Apr 27, 2017

[test]
[testextended][extended:all]

1 similar comment
@deads2k commented Apr 27, 2017

[test]
[testextended][extended:all]

@deads2k commented Apr 27, 2017

[test]
[testextended][extended:all]

1 similar comment
@deads2k commented Apr 27, 2017

[test]
[testextended][extended:all]

@openshift-bot:

Evaluated for origin testextended up to bd5ed9c

@openshift-bot:

continuous-integration/openshift-jenkins/testextended Running (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended/256/) (Base Commit: decce00) (Extended Tests: all)

@openshift-bot:

Evaluated for origin test up to bd5ed9c

@openshift-bot:

continuous-integration/openshift-jenkins/test Running (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/988/) (Base Commit: decce00)

@deads2k commented Apr 27, 2017

lgtm. I plan to merge on green.

@soltysh commented Apr 27, 2017

LGTM, 👍 on merge when green

@0xmichalis commented Apr 27, 2017 via email

@openshift-bot removed the needs-rebase label Apr 27, 2017
@deads2k merged commit 3659383 into openshift:master Apr 27, 2017
@deads2k commented Apr 27, 2017

Two e2e flakes, known upstream. We should skip those for now.

One OpenShift flake, we should watch prevalence.

One bad unit test check. This needs to be fixed asap.

@stevekuznetsov commented Apr 27, 2017

Do we have issues for those three items?

@sttts commented Apr 27, 2017

@stevekuznetsov the unit test fix follows in a minute. The e2e flakes are listed in the follow-up task issue #13732.

@sttts commented Apr 27, 2017

I will collect all open comments from this PR tomorrow and add them to #13732 or create individual issues.
