
What are the Slack Archives?

It’s a history of our time together in the Slack Community! There’s a ton of knowledge in here, so feel free to search through the archives for a possible answer to your question.

Because this space is not active, you won’t be able to create a new post or comment here. If you have a question or want to start a discussion about something, head over to our categories and pick one to post in! You can always refer back to a post from Slack Archives if needed; just copy the link to use it as a reference.

Hi, we'd be glad if anybody could help us regarding the Zed request routing which seems to have changed

U013KSS3MM0 Posts: 14 🧑🏻‍🚀 - Cadet

Hi, we'd be glad if anybody could help us with the Zed request routing, which seems to have changed in docker-sdk release 1.15.
Previously, there was an "rpc-server" component that allowed making internal Zed requests from Yves. This has changed with the new release: the Zed endpoint (serving both the backoffice and RPCs) defined in the deploy YAML file is now used as the Zed upstream. This leads to Zed requests leaving the internal network when a public endpoint is defined.
What's the proper way to keep Zed requests internal?

Comments

  • sprymiker Cloud Platform Architect Sprykee Posts: 781 🧑🏻‍🚀 - Cadet

    Hello Christian,

    That is a very good question. Thank you for your interest.

    The change was made to simplify the local environment and make it match the production environment we have in Cloud (and recommend having). It also allows us to use exactly the same images locally and in production for the frontend and the applications, so dev, QA, and CI environments are as prod-like as possible.

    So we introduced a gateway that plays the load balancer role (managing SSL and proxying/balancing) and a frontend server.

    1. We have a single frontend image that knows about all applications in the setup.
    2. In fact, if you set the environment up properly, RPC requests do not leave the private network, as the load balancer should resolve to an internal IP.
    3. We should use HTTPS even for calls within the private network.
    4. We do not have to manage internal hostnames and SSL certificates for them. The domain structure is transparent for everyone.
    5. Its operational and maintenance cost in Cloud is lower, as you have only one load balancer and one frontend group. With a separate RPC server you would have two balancers, two frontend groups, and an additional network.
    6. We added functionality to protect RPC servers from outside requests (see the example below).
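
    To illustrate the gateway + frontend split described above (this is not the actual docker/sdk configuration — all hostnames, upstream names, and ports here are made up), the routing can be sketched in nginx terms roughly like this:

    ```nginx
    # Sketch only: hostnames, upstream names, and ports are illustrative.

    # Gateway: terminates SSL and proxies/balances to the frontend group.
    server {
        listen 443 ssl;
        server_name *.mystore.com;

        ssl_certificate     /etc/nginx/ssl/mystore.crt;
        ssl_certificate_key /etc/nginx/ssl/mystore.key;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://frontend:80;   # single frontend group behind the balancer
        }
    }

    # Frontend: one image that knows every application and routes by Host.
    server {
        listen 80;
        server_name www.mystore.com;                              # Yves
        location / { proxy_pass http://yves:4000; }
    }

    server {
        listen 80;
        server_name backoffice.mystore.com gateway.mystore.com;   # Zed: BO + RPC
        location / { proxy_pass http://zed:9000; }
    }
    ```

    Because RPC domains resolve to the same gateway, keeping RPC traffic internal is purely a question of how that gateway's address resolves from inside the network.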
  • sprymiker Cloud Platform Architect Sprykee Posts: 781 🧑🏻‍🚀 - Cadet

    That is the schema of what we have in Cloud and now in the local docker environment.

  • sprymiker Cloud Platform Architect Sprykee Posts: 781 🧑🏻‍🚀 - Cadet

    However, nothing stops you from having this schema, using the same image for the FE server and the BE server. It’s up to you.

    Functionally there is no difference, and I believe it is fine to have the simplest option in a local environment.

  • sprymiker Cloud Platform Architect Sprykee Posts: 781 🧑🏻‍🚀 - Cadet

    We recommend limiting public access to the RPC Zed application using a new deploy.yml feature:

    x-gateway-auth: &gateway-auth
        <<: *real-ip
        auth:
            engine: whitelist
            include:
                - '${ALLOWED_IP}' # AWS gateway
    .....

        Zed:
            application: zed
            endpoints:
                gateway.mystore.com:
                    store: US
                    primal: true
                    <<: *gateway-auth
    
  • sprymiker Cloud Platform Architect Sprykee Posts: 781 🧑🏻‍🚀 - Cadet

    Dear Christian,

    I hope my explanation is complete enough and the arguments make sense to you.

    If you still have concerns or see weak points, please share them with us.

    Thank you again for your interest.

  • U013KSS3MM0 Posts: 14 🧑🏻‍🚀 - Cadet

    Hello,

    Thanks a lot for your quick response and detailed + helpful answer! That's great.

    The arguments totally make sense to me and I like the idea in general. :) However, I do have some questions regarding the mentioned setup. Beforehand, let me give some context: we are operating an environment in AWS in a Kubernetes cluster (EKS), together with an ALB as the ingress load balancer. Right now, we are using the routing concept shown in https://spryker.s3.eu-central-1.amazonaws.com/docs/Developer+Guide/Installation/Spryker+in+Docker/docker-local-environment-diagram.png (together with an asset S3 bucket).

    1) When using an internet-facing AWS Application Load Balancer to serve Yves, Glue, and Zed, we can't resolve the load balancer "internally" (in the local network), which would be needed to keep RPC requests off the public internet. Instead, all requests are routed via a NAT gateway to the internet towards the ALB. What could a proper environment setup look like without an internal load balancer (as given in the second schema), if that's possible at all?
    2) How is the backoffice exposed publicly in the second schema? Or does this concept only represent the RPC calls?
    3) Can the Zed application offer multiple endpoints (one for the backoffice and one for RPCs)?
    4) What does the "primal" option in the deploy.yml mean? Unfortunately, I couldn't find any documentation for it.

    Hope you don't mind my stupid questions ;)

  • sprymiker Cloud Platform Architect Sprykee Posts: 781 🧑🏻‍🚀 - Cadet
    edited July 2020

    Hello Christian,

    Here are my honest answers:

    1. I am not deep enough into AWS specifics to go into much detail. However, I know that even an internet-facing balancer has an internal IP. So if you are able to get the internal IP, you can create a private DNS zone where your RPC domains point to that internal IP: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-considerations.html#hosted-zone-private-considerations-public-private-overlapping. If that is not possible in your setup, you can still have an internal balancer and use the same frontend server (or two servers based on the same image), excluding the RPC domains from the public balancer.
    2. We always recommend making the BO accessible only via VPN. In this case you can use the whitelist described in this thread. I also want to emphasize that we are preparing marketplace features where the BO must be accessible from outside for everyone; in that case you will be able to vary security per application and/or domain.
    3. For sure. You can set multiple endpoints for the same application, or, if you need to, have different apps for BO and RPC. That is a good way to scale properly: you scale RPC separately from BO, or even scale differently per store (APPLICATION_STORE).
    4. primal is an official “workaround” that tells docker/sdk to use exactly this endpoint for the store's RPC calls. The documentation for the entire 1.15.0 release is in progress and will be available with the upcoming product release (probably in 1-2 weeks).
      P.S. docker/sdk just gives you the possibility to use the frontend image built OOTB, so it is your call to use it or build your own FE configuration.
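
    The private-hosted-zone approach from answer 1 boils down to one extra DNS record visible only inside the VPC. A minimal Route 53 change batch as a sketch — the RPC domain gateway.mystore.com and the internal IP 10.0.1.25 are purely hypothetical:

    ```json
    {
        "Comment": "Resolve the RPC domain to the balancer's internal IP inside the VPC",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "gateway.mystore.com",
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{ "Value": "10.0.1.25" }]
                }
            }
        ]
    }
    ```

    This would be applied with aws route53 change-resource-record-sets against a private hosted zone associated with the VPC, so only in-VPC resolvers see the record. Note that an ALB's internal IPs can change over time, so in practice an alias record or a dedicated internal balancer is more robust than a hard-coded A record.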
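
    The multiple-endpoints setup from answer 3 can be sketched in deploy.yml like this (the domains are made up; the whitelist anchor is the one from the earlier example in this thread):

    ```yaml
    Zed:
        application: zed
        endpoints:
            backoffice.mystore.com:      # human-facing Back Office
                store: US
            gateway.mystore.com:         # RPC traffic from Yves/Glue
                store: US
                primal: true             # docker/sdk sends the store's RPC calls here
                <<: *gateway-auth        # IP whitelist from the example above
    ```

    With two endpoints on one application you can expose the Back Office broadly while locking the RPC endpoint down, or later split them into separate apps to scale RPC independently of BO.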
  • sprymiker Cloud Platform Architect Sprykee Posts: 781 🧑🏻‍🚀 - Cadet
    edited July 2020

    Also, one more note: you can always contribute new features and proposals directly to the docker/sdk repo. We very much appreciate contributions. One PR is worth more than 1000 words. 🙂

    P.S. Just in case you end up using docker/sdk frontend images and implementing some cool features. 🚀

  • U013KSS3MM0 Posts: 14 🧑🏻‍🚀 - Cadet

    Hello,

    1) That's a good starting point, we will dig into that topic. 🙂
    2) I totally agree on additional security. Our idea was to set up AWS Cognito for authentication embedded in the ALB.
    3) Perfect!
    4) Sounds good.
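
    For what it's worth, the ALB + Cognito idea from point 2 maps to a pair of ELBv2 listener actions (authenticate, then forward); every ARN, ID, and name below is purely hypothetical:

    ```json
    [
        {
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:eu-central-1:123456789012:userpool/eu-central-1_EXAMPLE",
                "UserPoolClientId": "example-client-id",
                "UserPoolDomain": "mystore-auth"
            }
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/zed-backoffice/abc123"
        }
    ]
    ```

    Such a document would be passed to aws elbv2 modify-listener (or the equivalent ingress annotation in EKS) so the ALB forces a Cognito login before any request reaches the backoffice target group.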

    We previously built a customized nginx configuration, but we want to adapt to the native OOTB setup for easier maintenance. With all the information I have received, we can continue with the docker-sdk update.

    Thank you very much for the assistance and helpful insights. Have a great weekend! 👍