Q: Do KFServing and Seldon-core support dynamically downloading and updating models?
If supported, how is it implemented? If not, fall back to the mainstream init-container cold-start approach.
A: Both model-serving frameworks use the init-container approach.
See also the kfserving discussion on real-time model updates (opened April 2020, labeled as a feature, still open): kserve/kserve#772
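A minimal sketch of the init-container cold-start pattern described above: an init container pulls the model artifact into a shared emptyDir before the serving container starts. All names, images, and paths here are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-server            # hypothetical name
spec:
  initContainers:
  - name: model-downloader
    image: amazon/aws-cli       # any image with an S3 client
    # download the model artifact before the server starts
    command: ["aws", "s3", "cp", "s3://my-bucket/model/", "/mnt/models/", "--recursive"]
    volumeMounts:
    - name: model-store
      mountPath: /mnt/models
  containers:
  - name: server
    image: tensorflow/serving
    args: ["--model_base_path=/mnt/models"]
    volumeMounts:
    - name: model-store
      mountPath: /mnt/models
  volumes:
  - name: model-store
    emptyDir: {}
```

Note that with this pattern a model update requires restarting the pod (hence "cold start"): init containers run only once, at pod startup.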
Q: How does kubeflow store trained models, and how are models shared with serving?
A: Saving the model during training is up to you: save it yourself to a PV, a host-local path, S3, etc.
Storage backends supported by KFServing: gs / s3 / azure / pv / local path / http
https://github.com/kubeflow/kfserving/blob/master/python/kfserving/README.md#kfserving-server
See also the discussion on whether tf-operator should add data/model directory fields; the final conclusion was that tf-operator stays unaware of these worker directories. kubeflow/trainer#224.
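The storage list above maps to the `storageUri` field of a KFServing `InferenceService`. A sketch using the v1alpha2 API (the bucket path is hypothetical):

```yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: flowers-sample
spec:
  default:
    predictor:
      tensorflow:
        # also accepts gs://, pvc://<name>/<path>, https://, or a local path
        storageUri: "s3://my-bucket/models/flowers"
```

KFServing's storage initializer resolves the `storageUri` scheme and downloads the model into the predictor container, which is how a model saved by training (to S3, a PV, etc.) gets shared with serving.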
Q: Does kubeflow's train/serving use Deployment, Job, or Pod?
A: kubeflow/seldon serving uses Deployment;
training uses Pods: tf-operator, mpi-operator, etc.
See also the discussion of tf-operator implementation details; managing pods directly was chosen, mainly for flexibility / full control.
kubeflow/trainer#45,
kubeflow/trainer#325
So it is advised that pod-like solutions stay consistent with this:
Deployment for serving
Pod for training
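For contrast with Deployment-based serving, tf-operator's TFJob manages bare pods per replica, with no Deployment in between. A minimal hypothetical example (image and script are placeholders):

```yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train             # hypothetical name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
          - name: tensorflow    # TFJob expects the container to be named "tensorflow"
            image: my-registry/mnist:latest   # hypothetical image
            command: ["python", "/train.py"]  # hypothetical training script
```

The operator creates one pod per worker replica directly, which gives it the flexibility / full control cited in the tf-operator discussions (e.g. gang scheduling, per-replica restart policies) that a Deployment would not allow.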
Q: What is the control-plane resource footprint of KFServing and Seldon-core?
istio: 10 pods (version 1.19), resource analysis pending
depended on by: kfserving / seldon-core
knative serving: 7 pods, resource analysis pending
depended on by: kfserving
kfserving:
kubeflow: 25 pods, resource analysis pending
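For the pending resource analysis, a quick way to tally each control plane's footprint is to sum the pod counts and resource requests per namespace (namespace names are the conventional defaults and may differ per install; this requires a live cluster):

```shell
# count pods per control-plane namespace
for ns in istio-system knative-serving kubeflow; do
  echo -n "$ns: "
  kubectl get pods -n "$ns" --no-headers 2>/dev/null | wc -l
done

# sum CPU/memory requests in a namespace (requires kubectl and jq)
kubectl get pods -n istio-system -o json \
  | jq '[.items[].spec.containers[].resources.requests] | map(select(. != null))'
```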
Q: Which capabilities does KFServing depend on from Knative and istio, and which does seldon-core depend on from istio? Can the two be deployed on KubeEdge? If not today, might it become possible once kubeedge supports EdgeMesh?
Capabilities KFServing needs from Knative and istio: https://github.com/kubeflow/kfserving#prerequisites
Dependencies: https://www.kubeflow.org/docs/components/istio/istio-in-kubeflow/#why-kubeflow-needs-istio
-- knative serving (needs deeper study), example: https://knative.dev/docs/serving/samples/hello-world/helloworld-python/index.html
-- istio:
-- north-south traffic: .. whereas k8s Ingress only handles HTTP traffic
istio in kubeflow: https://www.kubeflow.org/docs/components/istio/istio-in-kubeflow/#istio-in-kubeflow
-- east-west traffic: compared with kube-proxy, if the pod a request is forwarded to cannot serve it, istio automatically retries another pod;
plus fine-grained traffic control, e.g. splitting traffic across application versions by percentage. https://jimmysong.io/blog/service-mesh-the-microservices-in-post-kubernetes-era/
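The percentage-based traffic splitting mentioned above is expressed in istio as a `VirtualService` with weighted routes. A sketch (the service name and subsets are hypothetical, and a matching `DestinationRule` defining the `v1`/`v2` subsets is assumed):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: model-routes            # hypothetical name
spec:
  hosts:
  - model-service               # hypothetical k8s Service
  http:
  - route:
    - destination:
        host: model-service
        subset: v1              # e.g. current model version
      weight: 90
    - destination:
        host: model-service
        subset: v2              # e.g. canary model version
      weight: 10
```

This per-request, percentage-based split is the capability kube-proxy's connection-level round-robin cannot provide, and it is what KFServing/Seldon-core lean on istio for when doing canary rollouts of model versions.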