DeepSeek-R1 Model Now Available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart

Today, we are excited to announce that the DeepSeek-R1 distilled Llama and Qwen models are available through Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. With this launch, you can now deploy DeepSeek AI's first-generation frontier model, DeepSeek-R1, along with its distilled versions ranging from 1.5 to 70 billion parameters, to build, experiment with, and responsibly scale your generative AI ideas on AWS.
In this post, we demonstrate how to get started with DeepSeek-R1 on Amazon Bedrock Marketplace and SageMaker JumpStart. You can follow similar steps to deploy the distilled versions of the models as well.
Overview of DeepSeek-R1
DeepSeek-R1 is a large language model (LLM) developed by DeepSeek AI that uses reinforcement learning to enhance reasoning capabilities through a multi-stage training process starting from a DeepSeek-V3-Base foundation. A key distinguishing feature is its reinforcement learning (RL) step, which was used to refine the model's responses beyond the standard pre-training and fine-tuning process. By incorporating RL, DeepSeek-R1 can adapt more effectively to user feedback and objectives, ultimately improving both relevance and clarity. In addition, DeepSeek-R1 employs a chain-of-thought (CoT) approach, meaning it is equipped to break down complex queries and reason through them in a step-by-step manner. This guided reasoning process allows the model to produce more accurate, transparent, and detailed answers. The model combines RL-based fine-tuning with CoT capabilities, aiming to generate structured responses while focusing on interpretability and user interaction. With its wide-ranging capabilities, DeepSeek-R1 has captured the industry's attention as a versatile text-generation model that can be integrated into various workflows such as agents, logical reasoning, and data interpretation tasks.
DeepSeek-R1 uses a Mixture of Experts (MoE) architecture and is 671 billion parameters in size. The MoE architecture activates 37 billion parameters per query, enabling efficient inference by routing requests to the most relevant expert "clusters." This approach allows the model to specialize in different problem domains while maintaining overall efficiency. DeepSeek-R1 requires at least 800 GB of HBM memory in FP8 format for inference. In this post, we will use an ml.p5e.48xlarge instance to deploy the model. ml.p5e.48xlarge comes with 8 Nvidia H200 GPUs providing 1128 GB of GPU memory.
The DeepSeek-R1 distilled models bring the reasoning capabilities of the main R1 model to more efficient architectures based on popular open models like Qwen (1.5B, 7B, 14B, and 32B) and Llama (8B and 70B). Distillation refers to a process of training smaller, more efficient models to mimic the behavior and reasoning patterns of the larger DeepSeek-R1 model, using it as a teacher model.
You can deploy the DeepSeek-R1 model either through SageMaker JumpStart or Bedrock Marketplace. Because DeepSeek-R1 is an emerging model, we recommend deploying it with guardrails in place. In this post, we will use Amazon Bedrock Guardrails to introduce safeguards, prevent harmful content, and evaluate models against key safety criteria. At the time of writing, for DeepSeek-R1 deployments on SageMaker JumpStart and Bedrock Marketplace, Bedrock Guardrails supports only the ApplyGuardrail API. You can create multiple guardrails tailored to different use cases and apply them to the DeepSeek-R1 model, improving user experiences and standardizing safety controls across your generative AI applications.
Prerequisites
To deploy the DeepSeek-R1 model, you need access to an ml.p5e instance. To check whether you have quotas for P5e, open the Service Quotas console and, under AWS Services, choose Amazon SageMaker, then confirm you're allotted ml.p5e.48xlarge for endpoint usage. Make sure that you have at least one ml.p5e.48xlarge instance available in the AWS Region you are deploying in. To request a limit increase, create a limit increase request and reach out to your account team.
Because you will be deploying this model with Amazon Bedrock Guardrails, make sure you have the correct AWS Identity and Access Management (IAM) permissions to use Amazon Bedrock Guardrails. For instructions, see Set up permissions to use guardrails for content filtering.
Implementing guardrails with the ApplyGuardrail API
Amazon Bedrock Guardrails allows you to introduce safeguards, prevent harmful content, and evaluate models against key safety criteria. You can implement safety measures for the DeepSeek-R1 model using the Amazon Bedrock ApplyGuardrail API. This allows you to apply guardrails to evaluate user inputs and model responses for deployments on Amazon Bedrock Marketplace and SageMaker JumpStart. You can create a guardrail using the Amazon Bedrock console or the API. For the example code to create the guardrail, see the GitHub repo.
The general flow involves the following steps: First, the system receives an input for the model. This input is then processed through the ApplyGuardrail API. If the input passes the guardrail check, it's sent to the model for inference. After the model's output is received, another guardrail check is applied. If the output passes this final check, it's returned as the result. However, if either the input or output is intercepted by the guardrail, a message is returned indicating the nature of the intervention and whether it occurred at the input or output stage. The examples in the following sections demonstrate inference using this API.
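The following is a minimal sketch of that flow using the boto3 bedrock-runtime client. The guardrail ID, guardrail version, and the placeholder model call are illustrative values to replace with your own; the complete sample is in the GitHub repo mentioned above.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholder identifiers -- substitute the guardrail ID and version
# you created in your own account.
GUARDRAIL_ID = "your-guardrail-id"
GUARDRAIL_VERSION = "1"


def guardrail_action(text: str, source: str) -> str:
    """Run one guardrail check on 'INPUT' or 'OUTPUT' text and return the action taken."""
    response = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source=source,
        content=[{"text": {"text": text}}],
    )
    return response["action"]  # "NONE" if the text passes, "GUARDRAIL_INTERVENED" if blocked


user_prompt = "Explain reinforcement learning in one paragraph."

# Step 1: screen the user input before it reaches the model.
if guardrail_action(user_prompt, "INPUT") == "GUARDRAIL_INTERVENED":
    print("Input blocked by guardrail.")
else:
    # Step 2: invoke the model (see the invoke_model example later in this post).
    model_output = "...model response..."  # placeholder for the actual inference call
    # Step 3: screen the model output before returning it to the user.
    if guardrail_action(model_output, "OUTPUT") == "GUARDRAIL_INTERVENED":
        print("Output blocked by guardrail.")
    else:
        print(model_output)
```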
Deploy DeepSeek-R1 in Amazon Bedrock Marketplace
Amazon Bedrock Marketplace gives you access to over 100 popular, emerging, and specialized foundation models (FMs) through Amazon Bedrock. To access DeepSeek-R1 in Amazon Bedrock, complete the following steps:
1. On the Amazon Bedrock console, choose Model catalog under Foundation models in the navigation pane.
At the time of writing this post, you can use the InvokeModel API to invoke the model. It doesn't support Converse APIs and other Amazon Bedrock tooling.
2. Filter for DeepSeek as a provider and choose the DeepSeek-R1 model.
The model detail page provides essential information about the model's capabilities, pricing structure, and implementation guidelines. You can find detailed usage instructions, including sample API calls and code snippets for integration. The model supports various text generation tasks, including content creation, code generation, and question answering, using its reinforcement learning optimization and CoT reasoning capabilities.
The page also includes deployment options and licensing information to help you get started with DeepSeek-R1 in your applications.
3. To start using DeepSeek-R1, choose Deploy.
You will be prompted to configure the deployment details for DeepSeek-R1. The model ID will be pre-populated.
4. For Endpoint name, enter an endpoint name (between 1-50 alphanumeric characters).
5. For Number of instances, enter a number of instances (between 1-100).
6. For Instance type, choose your instance type. For optimal performance with DeepSeek-R1, a GPU-based instance type like ml.p5e.48xlarge is recommended.
Optionally, you can configure advanced security and infrastructure settings, including virtual private cloud (VPC) networking, service role permissions, and encryption settings. For most use cases, the default settings will work well. However, for production deployments, you might want to review these settings to align with your organization's security and compliance requirements.
7. Choose Deploy to begin using the model.
When the deployment is complete, you can test DeepSeek-R1's capabilities directly in the Amazon Bedrock playground.
8. Choose Open in playground to access an interactive interface where you can explore various prompts and adjust model parameters like temperature and maximum length.
When using DeepSeek-R1 with Bedrock's InvokeModel and Playground Console, use DeepSeek's chat template for optimal results; for example, place the content for inference inside the template's user turn.
This is an excellent way to explore the model's reasoning and text generation abilities before integrating it into your applications. The playground provides immediate feedback, helping you understand how the model responds to various inputs and letting you refine your prompts for optimal results.
You can quickly test the model in the playground through the UI. However, to invoke the deployed model programmatically with any Amazon Bedrock APIs, you need to get the endpoint ARN.
Run inference using guardrails with the deployed DeepSeek-R1 endpoint
The following code example shows how to perform inference using a deployed DeepSeek-R1 model through Amazon Bedrock with the invoke_model and ApplyGuardrail APIs. You can create a guardrail using the Amazon Bedrock console or the API; for the example code to create the guardrail, see the GitHub repo. After you have created the guardrail, use the following code to implement guardrails. The script initializes the bedrock_runtime client, configures inference parameters, and sends a request to generate text based on a user prompt.
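As a rough illustration of what such a call can look like (not the post's exact script), the following sketch invokes a Marketplace endpoint by its ARN. The ARN value and the request payload schema are assumptions to verify against the model detail page, and the call can be wrapped with the ApplyGuardrail input and output checks shown earlier.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholder: use the endpoint ARN shown for your Marketplace deployment.
MODEL_ARN = "arn:aws:sagemaker:us-east-1:111122223333:endpoint/your-deepseek-r1-endpoint"

prompt = "Summarize the benefits of chain-of-thought prompting."

# Inference parameters; the payload keys below follow a common text-generation
# schema and should be checked against the model's usage instructions.
body = json.dumps({
    "inputs": prompt,
    "parameters": {"max_new_tokens": 512, "temperature": 0.6, "top_p": 0.9},
})

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ARN,
    body=body,
    contentType="application/json",
    accept="application/json",
)
print(json.loads(response["body"].read()))
```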
Deploy DeepSeek-R1 with SageMaker JumpStart
SageMaker JumpStart is a machine learning (ML) hub with FMs, built-in algorithms, and prebuilt ML solutions that you can deploy with just a few clicks. With SageMaker JumpStart, you can customize pre-trained models to your use case, with your data, and deploy them into production using either the UI or SDK.
Deploying the DeepSeek-R1 model through SageMaker JumpStart offers two convenient approaches: using the intuitive SageMaker JumpStart UI or implementing it programmatically through the SageMaker Python SDK. Let's explore both methods to help you choose the approach that best fits your needs.
Deploy DeepSeek-R1 through SageMaker JumpStart UI
Complete the following steps to deploy DeepSeek-R1 using SageMaker JumpStart:
1. On the SageMaker console, choose Studio in the navigation pane.
2. First-time users will be prompted to create a domain.
3. On the SageMaker Studio console, choose JumpStart in the navigation pane.
The model browser displays available models, with details like the provider name and model capabilities.
4. Search for DeepSeek-R1 to view the DeepSeek-R1 model card.
Each model card shows key details, including:
- Model name
- Provider name
- Task category (for example, Text Generation)
- Bedrock Ready badge (if applicable), indicating that this model can be registered with Amazon Bedrock, allowing you to use Amazon Bedrock APIs to invoke the model
5. Choose the model card to view the model details page.
The model details page includes the following information:
- The model name and provider information
- Deploy button to deploy the model
- About and Notebooks tabs with detailed information
The About tab includes important details, such as:
- Model description
- License information
- Technical specifications
- Usage guidelines
Before you deploy the model, it's recommended to review the model details and license terms to confirm compatibility with your use case.
6. Choose Deploy to proceed with deployment.
7. For Endpoint name, use the automatically generated name or create a custom one.
8. For Instance type, choose an instance type (default: ml.p5e.48xlarge).
9. For Initial instance count, enter the number of instances (default: 1).
Selecting appropriate instance types and counts is crucial for cost and performance optimization. Monitor your deployment to adjust these settings as needed. Under Inference type, Real-time inference is selected by default; this is optimized for sustained traffic and low latency.
10. Review all configurations for accuracy. For this model, we strongly recommend adhering to the SageMaker JumpStart default settings and making sure that network isolation remains in place.
11. Choose Deploy to deploy the model.
The deployment process can take several minutes to complete.
When deployment is complete, your endpoint status will change to InService. At this point, the model is ready to accept inference requests through the endpoint. You can monitor the deployment progress on the SageMaker console Endpoints page, which will display relevant metrics and status details. When the deployment is complete, you can invoke the model using a SageMaker runtime client and integrate it with your applications.
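As a quick illustration, a SageMaker runtime client call against the new endpoint might look like the following sketch; the endpoint name and the payload schema are placeholders to check against your deployment and the model's usage instructions.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Placeholder: the endpoint name you chose during deployment.
ENDPOINT_NAME = "your-deepseek-r1-endpoint"

# Assumed text-generation payload schema; verify against the model's notebook.
payload = {
    "inputs": "Explain the difference between distillation and fine-tuning.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.6},
}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```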
Deploy DeepSeek-R1 using the SageMaker Python SDK
To get started with DeepSeek-R1 using the SageMaker Python SDK, you will need to install the SageMaker Python SDK and make sure you have the necessary AWS permissions and environment set up. The following is a step-by-step code example that demonstrates how to deploy and use DeepSeek-R1 for inference programmatically. The code for deploying the model is provided in the GitHub repo. You can clone the notebook and run it from SageMaker Studio.
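A minimal sketch of the SDK path is shown below. The JumpStart model ID here is a hypothetical placeholder (look up the exact DeepSeek-R1 identifier on the model card in SageMaker Studio), and the notebook in the repo remains the authoritative version.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Hypothetical model ID -- confirm the exact DeepSeek-R1 identifier on the
# JumpStart model card before running.
model = JumpStartModel(model_id="deepseek-llm-r1")

# Deploy to a real-time endpoint; accept_eula acknowledges the model's license terms.
predictor = model.deploy(accept_eula=True)

# First request against the new endpoint (payload keys assumed; see the model card).
response = predictor.predict({
    "inputs": "What is chain-of-thought reasoning?",
    "parameters": {"max_new_tokens": 256, "temperature": 0.6},
})
print(response)
```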
You can run additional requests against the predictor:
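For example, a follow-up request with different sampling settings might look like this (again, the payload keys are assumptions to verify against the model's details page):

```python
# Another request against the same predictor, with different sampling settings.
response = predictor.predict({
    "inputs": "Walk through the steps to solve 17 * 24 without a calculator.",
    "parameters": {"max_new_tokens": 512, "temperature": 0.2, "top_p": 0.9},
})
print(response)
```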
Implement guardrails and run inference with your SageMaker JumpStart predictor
Similar to Amazon Bedrock, you can also use the ApplyGuardrail API with your SageMaker JumpStart predictor. You can create a guardrail using the Amazon Bedrock console or the API, and implement it as shown in the following code:
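A minimal sketch of that combination, reusing the predictor from the previous section and placeholder guardrail identifiers, could look like this:

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholder guardrail identifiers from your Amazon Bedrock account.
GUARDRAIL_ID = "your-guardrail-id"
GUARDRAIL_VERSION = "1"

prompt = "Draft a short product description for a hiking backpack."

# Screen the prompt with the Bedrock guardrail before calling the SageMaker endpoint.
check = bedrock_runtime.apply_guardrail(
    guardrailIdentifier=GUARDRAIL_ID,
    guardrailVersion=GUARDRAIL_VERSION,
    source="INPUT",
    content=[{"text": {"text": prompt}}],
)

if check["action"] == "GUARDRAIL_INTERVENED":
    print("Prompt blocked by guardrail.")
else:
    result = predictor.predict({"inputs": prompt})
    # The model response can be screened the same way with source="OUTPUT"
    # before it is returned to the user.
    print(result)
```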
Clean up
To avoid unwanted charges, complete the steps in this section to clean up your resources.
Delete the Amazon Bedrock Marketplace deployment
If you deployed the model using Amazon Bedrock Marketplace, complete the following steps:
1. On the Amazon Bedrock console, under Foundation models in the navigation pane, choose Marketplace deployments.
2. In the Managed deployments section, locate the endpoint you want to delete.
3. Select the endpoint, and on the Actions menu, choose Delete.
4. Verify the endpoint details to make sure you're deleting the correct deployment:
1. Endpoint name
2. Model name
3. Endpoint status
Delete the SageMaker JumpStart predictor
The SageMaker JumpStart model you deployed will incur costs if you leave it running. Use the following code to delete the endpoint if you want to stop incurring charges. For more details, see Delete Endpoints and Resources.
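A minimal sketch using the predictor from the earlier deployment example:

```python
# Tear down the endpoint and the associated model to stop incurring charges.
predictor.delete_model()
predictor.delete_endpoint()
```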
Conclusion
In this post, we explored how you can access and deploy the DeepSeek-R1 model using Bedrock Marketplace and SageMaker JumpStart. Visit SageMaker JumpStart in SageMaker Studio or Amazon Bedrock Marketplace now to get started. For more information, refer to Use Amazon Bedrock tooling with Amazon SageMaker JumpStart models, SageMaker JumpStart pretrained models, Amazon SageMaker JumpStart Foundation Models, Amazon Bedrock Marketplace, and Getting started with Amazon SageMaker JumpStart.
About the Authors
Vivek Gangasani is a Lead Specialist Solutions Architect for Inference at AWS. He helps emerging generative AI companies build innovative solutions using AWS services and accelerated compute. Currently, he is focused on developing strategies for fine-tuning and optimizing the inference performance of large language models. In his free time, Vivek enjoys hiking, watching movies, and trying different cuisines.
Niithiyn Vijeaswaran is a Generative AI Specialist Solutions Architect with the Third-Party Model Science team at AWS. His area of focus is AWS AI accelerators (AWS Neuron). He holds a Bachelor's degree in Computer Science and Bioinformatics.
Jonathan Evans is a Specialist Solutions Architect working on generative AI with the Third-Party Model Science team at AWS.
Banu Nagasundaram leads product, engineering, and strategic partnerships for Amazon SageMaker JumpStart, SageMaker's machine learning and generative AI hub. She is passionate about building solutions that help customers accelerate their AI journey and unlock business value.