DataEng Uncomplicated
  • 101 videos
  • 933,153 views
Chatting with Data Using Gen AI - AWS Services Explained
This video covers the different AWS services you can use to ask questions of your data using generative AI. It briefly discusses what each service does and guides you in selecting the right one for your use case.
DoubleCloud Sponsor Link - double.cloud/services/managed-airflow/?DataEngUncomplicated&DEU-YT
#awsservices #dataengineering #genai #amazonbedrock #amazonquizanswertrick
Buy me a Coffee - www.buymeacoffee.com/dataengu
Views: 411

Videos

Deploying a Glue Job to AWS with Terraform: A Step-by-Step Tutorial
3.3K views · 4 months ago
This video is a guide on how to deploy an AWS Glue PySpark job using Terraform. It covers the IAM role required to run your Glue job, configuring a Glue job in Terraform, and how to add variables that can differ depending on your AWS environment. Github Repo: github.com/AdrianoNicolucci/dataenguncomplicated/tree/main/terraform_examples/deploy_glue_job From this Video's Sponsor: Get one mo...
Develop AWS Glue Jobs Locally Using Visual Studio Code and Docker on Windows - step by step
7K views · 7 months ago
This video is a step-by-step tutorial on configuring your Windows computer to work with Visual Studio Code (VS Code) and Docker to run AWS Glue jobs. It will walk through how to configure AWS Glue 4.0. Buy me a Coffee - www.buymeacoffee.com/dataengu DoubleCloud Sponsor Link - double.cloud/?DataEng& Tutorial Links: AWS Documentation - aws.amazon.com/blogs/big-data/develop-and-test-aws-gl...
Mastering AWS Glue Unit Testing for PySpark Jobs with Pytest
4.3K views · 8 months ago
This video is a step-by-step guide on how to write unit tests for functions in a PySpark job that runs on the AWS Glue service. It covers how to write sample datasets to test our Glue job transformations and make sure they are doing what we expect. Buy me a Coffee - www.buymeacoffee.com/dataengu Tutorial Links: Configure Docker with AWS Glue Jobs - ruclips.net/video/-4ZnJk...
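The testing pattern the video describes can be sketched without a Spark cluster. Here plain Python rows stand in for a Spark DataFrame so the example is self-contained; in a real Glue job the function under test would take and return a DataFrame or DynamicFrame, and the function and column names below are illustrative, not from the video.

```python
# Transformation under test: remove rows with a missing customer_id.
def drop_null_customers(rows):
    return [r for r in rows if r.get("customer_id") is not None]

# Pytest-style test: build a tiny hand-made sample dataset, run the
# transformation, and assert on the result.
def test_drop_null_customers():
    sample = [
        {"customer_id": 1, "amount": 10.0},
        {"customer_id": None, "amount": 99.0},  # should be dropped
    ]
    result = drop_null_customers(sample)
    assert len(result) == 1
    assert result[0]["customer_id"] == 1
```

The same structure carries over to Spark: a pytest fixture would create a local SparkSession, the sample data would become a small DataFrame, and the assertions would compare collected rows.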
Why Data Engineers Should Develop AWS Glue Jobs Locally
6K views · 9 months ago
If you're a data engineer, developer, or anyone working with AWS Glue, you'll know that the process of building and testing ETL jobs can be complex and resource-intensive. However, there's a solution that offers more control, faster feedback, and greater flexibility in your development process - developing your AWS Glue jobs locally. I will cover the top reasons I think Data Engineers will bene...
Develop AWS Glue Jobs Locally Using PyCharm and Docker on Windows - step by step
7K views · 9 months ago
This video is a step-by-step tutorial on configuring your Windows computer to work with PyCharm Professional and Docker to run AWS Glue jobs. It will walk through how to configure AWS Glue 4.0. timeline 00:00 Introduction 01:11 Pull AWS Glue Docker image 02:46 Configure Python interpreter with Docker 04:16 Set up AWS Glue-related code completion suggestions 07:06 Configure Environment Va...
Query Redshift Table with SQL in Python | AWS SDK for Pandas
2.3K views · 11 months ago
This is a hands-on tutorial that walks through the step-by-step process of querying data in a Redshift database using a SQL statement. In this video, I first cover how to connect to a Redshift cluster in Python, use SQL to pull values from a single table, use a WHERE clause to limit how much data we query, use Python variables in our SQL statements, and how to qu...
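A hedged sketch of the approach this tutorial covers, using awswrangler (the AWS SDK for pandas). The Glue connection name "my-redshift-conn" and the table and column names are assumptions for illustration; actually running the query requires AWS credentials and a reachable cluster.

```python
def build_query(table: str, min_id: int) -> str:
    """Embed Python variables in a SQL statement, as the video describes."""
    return f"SELECT * FROM {table} WHERE id >= {min_id}"

def query_redshift(table: str, min_id: int):
    """Connect to Redshift and run the query (needs an AWS environment)."""
    import awswrangler as wr  # imported lazily so the sketch loads anywhere
    con = wr.redshift.connect("my-redshift-conn")  # Glue Catalog connection
    try:
        # Returns a pandas DataFrame with the query results
        return wr.redshift.read_sql_query(build_query(table, min_id), con=con)
    finally:
        con.close()
```

In production code, prefer driver-side query parameters over string interpolation to avoid SQL injection; the f-string form above mirrors the "Python variables in SQL" idea from the video.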
AWS Glue PySpark: Upserting Records into a Redshift Table
6K views · 1 year ago
This video is a step-by-step guide on how to upsert records into a dynamic frame using PySpark. It uses a file from S3 with new and existing records that we want to upsert into our Redshift table. github: github.com/AdrianoNicolucci/dataenguncomplicated/blob/main/redshift/Pyspark_Upsert_records_to_redshift.ipynb Related videos: ruclips.net/video/c7_1POi3KRc/видео....
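The usual Glue-to-Redshift upsert pattern loads incoming records into a staging table and runs merge SQL as a postaction. A hedged sketch of that pattern (the table, key, connection, and bucket names are assumptions, not taken from the video; the linked notebook has the author's actual code):

```python
def build_upsert_sql(target: str, staging: str, key: str) -> str:
    """Delete target rows that match staged keys, then insert all staged rows."""
    return (
        f"BEGIN; "
        f"DELETE FROM {target} USING {staging} "
        f"WHERE {target}.{key} = {staging}.{key}; "
        f"INSERT INTO {target} SELECT * FROM {staging}; "
        f"DROP TABLE {staging}; END;"
    )

# In the Glue job, the write would look roughly like this (only runs in Glue):
# glueContext.write_dynamic_frame.from_jdbc_conf(
#     frame=new_records_dyf,
#     catalog_connection="redshift-conn",
#     connection_options={
#         "dbtable": "staging_orders",
#         "database": "dev",
#         "postactions": build_upsert_sql("orders", "staging_orders", "order_id"),
#     },
#     redshift_tmp_dir="s3://my-temp-bucket/",
# )
```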
Upsert Records To Amazon Redshift - AWS SDK for Pandas
1.8K views · 1 year ago
This is a step-by-step tutorial on performing an upsert from a pandas DataFrame to an Amazon Redshift table. It explains which methods we can use to achieve this and provides a real-world example with sample data. Related Redshift tutorials: Add Redshift Data Source In AWS Glue Catalog - ruclips.net/video/c7_1POi3KRc/видео.html AWS Glue PySpark:Insert records into Amazon Redshift Table...
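The upsert idea itself can be sketched locally on plain pandas DataFrames: incoming rows replace existing rows that share a key, and genuinely new keys are appended. The key and column names here are illustrative; the video pushes the final result to Redshift with the AWS SDK for pandas.

```python
import pandas as pd

def upsert(existing: pd.DataFrame, new: pd.DataFrame, key: str) -> pd.DataFrame:
    """Merge new rows into existing ones; incoming rows win on key collisions."""
    combined = pd.concat([existing, new], ignore_index=True)
    # keep="last" means rows from `new` override rows from `existing`
    return combined.drop_duplicates(subset=[key], keep="last").reset_index(drop=True)

existing = pd.DataFrame({"id": [1, 2], "amount": [10, 20]})
new = pd.DataFrame({"id": [2, 3], "amount": [25, 30]})
result = upsert(existing, new, "id")
# result has ids 1, 2, 3 and id 2's amount updated to 25
```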
Add Column To Redshift Table - With SQL
1.3K views · 1 year ago
This video is a walkthrough on how to add a column to an existing Amazon Redshift table using SQL. It explains the code and shows what the data looks like before and after running the SQL to add the new column and insert data. github: github.com/AdrianoNicolucci/dataenguncomplicated/blob/main/redshift/add_column_to_table.sql #aws #redshift
AWS re:Invent 2022 Swami Sivasubramanian Keynote Summary of New Features & Services
1.1K views · 1 year ago
This video highlights the new feature and service announcements made by Swami Sivasubramanian during re:Invent 2022 related to data & analytics. Keynote session full video: ruclips.net/video/TL2HtX-FmiQ/видео.html #aws #reinvent2022
AWS Glue PySpark:Insert records into Amazon Redshift Table
11K views · 1 year ago
This video is a step-by-step guide on how to write new records to an Amazon Redshift table with AWS Glue PySpark. Add Redshift table to AWS Glue Catalog: ruclips.net/video/c7_1POi3KRc/видео.html #aws #awsglue
AWS re:Invent 2022 Adam Selipsky Keynote Summary for Data & Analytics (New Features & Services)
2.4K views · 1 year ago
This video highlights the new feature and service announcements made by Adam Selipsky during re:Invent 2022 related to data & analytics. #aws #reinvent2022
Add Redshift Data Source In AWS Glue Catalog
7K views · 1 year ago
This video is about how to add tables from a Redshift cluster into the Glue Catalog so they can be used by other services. Timeline 00:00 Introduction 00:47 Add Redshift database connection 02:04 Create AWS Glue crawler role 03:24 Create Glue crawler 06:33 Configure VPC Endpoint #aws #awsglue
AWS Glue PySpark: Change Column Data Types
3.3K views · 1 year ago
This video is about how to change column data types in AWS Glue using PySpark. This tutorial walks through how to achieve this using the resolveChoice method on a DynamicFrame. Code Example: github.com/AdrianoNicolucci/dataenguncomplicated/blob/main/aws_glue/Change_Data_Types.ipynb #aws #awsglue #pyspark
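For reference, a hedged sketch of the resolveChoice pattern the description names. The column names and types are assumptions for illustration, and the DynamicFrame call itself only runs inside an AWS Glue environment; the linked notebook has the author's actual code.

```python
def cast_specs(column_types: dict) -> list:
    """Build resolveChoice specs like [("price", "cast:double"), ...]."""
    return [(col, f"cast:{dtype}") for col, dtype in column_types.items()]

# In the Glue job (only runs in Glue; `dyf` is an existing DynamicFrame):
# resolved_dyf = dyf.resolveChoice(
#     specs=cast_specs({"price": "double", "quantity": "long"})
# )
```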
AWS Glue PySpark: Calculate Fields
2.2K views · 1 year ago
AWS Glue PySpark: Calculate Fields
AWS Glue PySpark: Unpivot Columns To Rows
2.3K views · 1 year ago
AWS Glue PySpark: Unpivot Columns To Rows
AWS Glue PySpark: Flatten Nested Schema (JSON)
13K views · 1 year ago
AWS Glue PySpark: Flatten Nested Schema (JSON)
AWS Glue PySpark: Drop Fields
1.8K views · 1 year ago
AWS Glue PySpark: Drop Fields
AWS Glue PySpark: Rename Fields
2.7K views · 1 year ago
AWS Glue PySpark: Rename Fields
AWS Glue PySpark: Filter Data in a DynamicFrame
8K views · 1 year ago
AWS Glue PySpark: Filter Data in a DynamicFrame
AWS Glue: Write Parquet With Partitions to AWS S3
16K views · 1 year ago
AWS Glue: Write Parquet With Partitions to AWS S3
AWS Glue: Read CSV Files From AWS S3 Without Glue Catalog
29K views · 1 year ago
AWS Glue: Read CSV Files From AWS S3 Without Glue Catalog
AWS Glue: Flex Jobs (New Feature Release)
1.2K views · 1 year ago
AWS Glue: Flex Jobs (New Feature Release)
Practical Projects to Learn Data Engineering On AWS
43K views · 1 year ago
Practical Projects to Learn Data Engineering On AWS
Orchestrate Glue Jobs With Step Functions
13K views · 1 year ago
Orchestrate Glue Jobs With Step Functions
AWS Glue Job Import Libraries Explained (And Why We Need Them)
17K views · 2 years ago
AWS Glue Job Import Libraries Explained (And Why We Need Them)
Comparing JSON In AWS Lambda With Deltajson
604 views · 2 years ago
Comparing JSON In AWS Lambda With Deltajson
Author AWS Glue jobs with PyCharm Using AWS Glue Interactive Sessions
13K views · 2 years ago
Author AWS Glue jobs with PyCharm Using AWS Glue Interactive Sessions
List of Tuples to Pandas DataFrame | Python Tutorial
2.1K views · 2 years ago
List of Tuples to Pandas DataFrame | Python Tutorial
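The list-of-tuples tutorial above covers a pattern small enough to sketch inline (the column names and data here are illustrative, not from the video):

```python
import pandas as pd

# Each tuple becomes one row; `columns` supplies the column names.
rows = [("alice", 30), ("bob", 25)]
df = pd.DataFrame(rows, columns=["name", "age"])
# df now has two rows and columns "name" and "age"
```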

Comments

  • @Scott-s7f
    @Scott-s7f 19 hours ago

    nice video! what's the point of using jobs in notebooks since bookmarks aren't supported there? is there another benefit?

  • @gudguy1a
    @gudguy1a 1 day ago

    (a bit late to this video) But still - okay, very good - well done. I now wish I had come across you a year ago - I see from your videos, you do a good job. NO massive amounts of annoying face time in the videos as too many others do. Clear and concise info in the videos AND great narrating voice. I went through a lot of pain to first, obtain the GCP Pro Data Engineer cert last year - but because of NOT having the REQUIRED "multiple years of hands-on GCP data engineering work experience" companies want AND not many GCP DE roles near me (not doing full remote yet) AND the fall tech layoffs - I had to rotate back to AWS world (after a decade as an AWS Cloud Architect/Engineer - renewed Architect Pro cert last year). Spent the past 5 months studying my hinie off and then gaining the AWS DE cert (the reason why I wrote my June 15th paper - tinyurl.com/48e6skab to help others out, to 'maybe' expedite their path...). Now the problematic deal is to find a company who will take me on with the double DE certs and without the REQUIRED "multiple years of hands-on data engineering work experience". My thoughts have been, I should be a magnet to multiple companies after gaining these two supposedly difficult DE certs... AWS was difficult and did not seem to be an Associate level cert (I've acquired other AWS Pro certs and this AWS Data Engineer cert most definitely seemed at that level - it pissed me off because I failed the exam back in April...).

  • @Nayak_Bukya_08
    @Nayak_Bukya_08 8 days ago

    As I am working on Glue, this is super urgent for me. The question is: how do I add the source file name to the dynamic frame? It would be great if you could respond on a priority basis. Thank you

    • @DataEngUncomplicated
      @DataEngUncomplicated 8 days ago

      Hi, this is outside the scope of this video. I don't know if this is even possible, to be honest. With DataFrames the files are abstracted away from us. Please post your question on AWS re:Post to see if someone can help you out. Also, a Google search or reading the docs can be your friend!

  • @Nayak_Bukya_08
    @Nayak_Bukya_08 9 days ago

    How do you get the input file name of a record in a Spark dynamic frame?

  • @blockchainstreet
    @blockchainstreet 13 days ago

    Amazing Man!! Good one

  • @joelayodeji2533
    @joelayodeji2533 14 days ago

    thank you for making videos like this.

  • @torpedoe1936
    @torpedoe1936 19 days ago

    Thanks, sir !!!!!

  • @externalbiconsultant2054
    @externalbiconsultant2054 19 days ago

    Wondering if watching costs is really a data engineer's activity?

    • @DataEngUncomplicated
      @DataEngUncomplicated 19 days ago

      Yes, cost optimization is part of every role when working in a cloud environment. If you work for a large, well-funded organization that isn't coming down on costs, you might not feel it as much as a startup that freaks out over an extra $100 in cloud costs.

  • @bartstough8201
    @bartstough8201 20 days ago

    Still a great overview. Makes everything a lot clearer. Thank you.

  • @dougkfarrell
    @dougkfarrell 27 days ago

    This is fantastic! I'm new to AWS Glue and was really struggling to get traction developing an ETL script. Being able to develop locally, I don't really care about the costs, but the ability to debug, get feedback, and just the turnaround time to try things is amazing. Again, thanks. I'd like to ask you more questions, how can I do that?

    • @DataEngUncomplicated
      @DataEngUncomplicated 27 days ago

      Thanks, feel free to post your questions here. I or someone else might be able to help you out!

    • @dougkfarrell
      @dougkfarrell 27 days ago

      @@DataEngUncomplicated Thanks! I'm using Glue ETL to read two different CSV files into Dynamic Frames, normalize and union them together. I need to write some SQL to an existing RDS MySQL database to query records to figure out if I need to update or insert data. Is there a good (as in fast) way to iterate over the normalized, unioned DynamicFrame and read and write to an RDS MySQL database? Thanks in advance for any help!

  • @admiralbenbow7677
    @admiralbenbow7677 27 days ago

    I guess you forgot to show how to make a connection in pgAdmin first

    • @DataEngUncomplicated
      @DataEngUncomplicated 27 days ago

      Can you explain why you think you need to make a connection in pgAdmin first? I walked through how to create the database connection in the Glue Catalog.

    • @admiralbenbow7677
      @admiralbenbow7677 27 days ago

      @@DataEngUncomplicated Sorry, I am new to this, so forgive me if I am asking silly questions. Isn't the data stored locally on your computer, so you have to make a connection there first? If not, how can Glue find where it is, and how did it automatically recognize etldemo?

    • @admiralbenbow7677
      @admiralbenbow7677 27 days ago

      @@DataEngUncomplicated Oh, silly me, I got confused with data migration. My bad 😅

  • @nlopedebarrios
    @nlopedebarrios 27 days ago

    Now AWS includes AWSSDKPandas-Python312, so it's easier to add pandas to your Lambda function. However, I'm getting "Missing optional dependency 'fsspec'. Use pip or conda to install fsspec." I've followed these steps to install the latest version, but it failed: "Unable to import module 'lambda_function': No module named 'fsspec'". Any suggestions?

  • @rahulsrivastava9787
    @rahulsrivastava9787 28 days ago

    The concepts in this video went inside my brain like a hot knife going in butter. Great video for someone like me who comes from a functional background. Great work...really appreciated.

  • @tejaswi3046
    @tejaswi3046 1 month ago

    I am still facing the numpy import error and even used the AWSLambda-Python38-SciPy1x layer, but I am unable to resolve it. Kindly let me know if you have any inputs.

    • @DataEngUncomplicated
      @DataEngUncomplicated 1 month ago

      Strange, sorry, it worked for me and others with the Lambda layer I selected. Try selecting the specific version I used, perhaps?

  • @SonPhan1
    @SonPhan1 1 month ago

    i appreciate the really informative video! I followed everything and i'm stuck on running the pyspark code in the dev container environment. when i launch the dev container in the same/new window, i don't see the extensions in the container environment. The python interpreter doesn't show up either and when i go to the extension tab in the container environment, all the extensions are not installed. Is there additional configuration files in vs code i need to modify to enable the already installed extensions to run from the dev container?

  • @sylarguo7741
    @sylarguo7741 1 month ago

    I'm a newcomer to AWS as a data engineer. Your channel really helps me! Appreciate it!

  • @gilang6128
    @gilang6128 1 month ago

    love this

  • @onuabah3001
    @onuabah3001 1 month ago

    You're an excellent teacher, keep it up...subscribing now

  • @asfakmp7244
    @asfakmp7244 1 month ago

    Thanks for the video! I've tested the entire workflow, but I'm encountering an issue with the section on creating a DynamicFrame from the target Redshift table in the AWS Glue Data Catalog and displaying its schema. While I can see the updated schema reflected in the Glue catalog table, the code you provided still prints the old schema.

  • @user-bu1ct5kz9g
      @user-bu1ct5kz9g 1 month ago

    thank you man!!!

  • @DeepakSingh-of2xm
    @DeepakSingh-of2xm 1 month ago

    Can you please make a video showing practical implementation of event driven architecture using event bridge, sns, sqs and lambda? Thank you!

    • @DataEngUncomplicated
      @DataEngUncomplicated 1 month ago

      I'll add it to my backlog. Thanks for the suggestion.

    • @DeepakSingh-of2xm
      @DeepakSingh-of2xm 1 month ago

      @@DataEngUncomplicated Thank you, appreciate it. Can we create the infrastructure using Terraform, if possible?

  • @joudawad1042
    @joudawad1042 1 month ago

    One of the best channels on YouTube for data engineering! Great content, keep up the great work

  • @DeepakSingh-of2xm
    @DeepakSingh-of2xm 1 month ago

    Can you please make a video showing implementation of event driven architecture using event bridge, sns, sqs and lambda? Thank you!

  • @SimonLopez-hj2cj
    @SimonLopez-hj2cj 1 month ago

    How do I personalize the message that SNS sends?

    • @DataEngUncomplicated
      @DataEngUncomplicated 1 month ago

      In the SNS step there should be a box where you can customize the message

    • @SimonLopez-hj2cj
      @SimonLopez-hj2cj 1 month ago

      @@DataEngUncomplicated then how do i use the parameters of the job? for example if i want to send "The job state is (~SUCCEDED~ or ~FAILED~). At this time ~endtime~ ", thanks

  • @victorsilva9000
    @victorsilva9000 1 month ago

    Can Lake Formation be created from IaC, like CloudFormation or Terraform?

  • @user-ij4ih8qp3e
    @user-ij4ih8qp3e 1 month ago

    Thank u so much. Your tutorial helps me a lot.

  • @muralichiyan
    @muralichiyan 1 month ago

    Are Databricks and Glue the same?

    • @DataEngUncomplicated
      @DataEngUncomplicated 1 month ago

      If you're asking whether Databricks and Glue are the same, then no, they definitely are not.

  • @prabhathkota107
    @prabhathkota107 1 month ago

    I was able to run successfully with glueContext.write_dynamic_frame.from_options and glueContext.write_dynamic_frame.from_jdbc_conf, but I see an issue with glueContext.write_dynamic_frame.from_catalog. Getting the error below: Error Category: UNCLASSIFIED_ERROR; An error occurred while calling o130.pyWriteDynamicFrame. Exception thrown in awaitResult: SQLException thrown while running COPY query; will attempt to retrieve more information by querying the STL_LOAD_ERRORS table. Could you please guide

    • @DataEngUncomplicated
      @DataEngUncomplicated 1 month ago

      These errors can be tricky and require you to look into the logs further to see what is causing it. It sounds specific to your data.

  • @stevenjosephceniza8245
    @stevenjosephceniza8245 1 month ago

    Thank you for this guide! I tried using PyCharm and my old computer cannot handle it. I almost purchased a subscription.

    • @DataEngUncomplicated
      @DataEngUncomplicated 1 month ago

      Hi, I'm not sure what you mean by almost purchased a subscription. But you do need Pro to use Docker in PyCharm.

  • @prabhathkota107
    @prabhathkota107 2 months ago

    Beautifully explained setup. Understood how Docker works as well. Thanks a ton, subscribed.

  • @prabhathkota107
    @prabhathkota107 2 months ago

    The Docker option is not available in PyCharm Community Edition, I guess

  • @prabhathkota107
    @prabhathkota107 2 months ago

    Didn't understand why cost is incurred, as it's running locally. And why keep nodes set to 2?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      Your UI is local, but there is still a Spark cluster running in AWS.

  • @prabhathkota107
    @prabhathkota107 2 months ago

    Very much helpful. Thanks

  • @mihaicosmin866
    @mihaicosmin866 2 months ago

    Is it possible to do the same thing but with line features?

  • @jovidog9573
    @jovidog9573 2 months ago

    Hello. I made a Glue Job that performs ETL changes to data in an S3 Bucket and exports the changed data to a Redshift database, but now I'm thinking of changing from Redshift to PostgreSQL. I know this video is for importing RDS data into Glue, but if I follow the video's instructions, would I also be able to export it back into RDS?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      Hi, this video is only about how to add an RDS data source like Postgres to the AWS Glue Catalog. So if you establish your Postgres database connection, you should be able to read and write data to it.

  • @jzevakin
    @jzevakin 2 months ago

    Thank you!!

  • @rohithreddy41
    @rohithreddy41 2 months ago

    Thank you for the video. I am unable to run the program because I do not see the run button after clicking "attach in current window".

    • @rohithreddy41
      @rohithreddy41 2 months ago

      I had to install Python in the container and now I see the run button.

  • @user-vb7im1jb1b
    @user-vb7im1jb1b 2 months ago

    Thanks for this tutorial! I have a question: How do I store one big Parquet file in S3 without running into kernel-dying issues? I have already used df.convert_dtypes() but it is still failing. My file has over 1.5 million rows. Files below 900k rows are not failing! Thanks

  • @bk3460
    @bk3460 2 months ago

    Sorry, what is wrong with df = spark.read.csv(path)?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      That works too, but it's not using the AWS Glue library to do it.

    • @bk3460
      @bk3460 2 months ago

      @@DataEngUncomplicated Sorry, I'm new to Spark and Glue. Would you mind elaborating on which Glue library you are referring to? I know about the Glue Data Catalog, but it is not affected when I use df = spark.read.csv(path).

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      Give the AWS Glue API and the transformations that come with it a read: docs.aws.amazon.com/glue/latest/dg/aws-glue-api.html

  • @ajprasad6865
    @ajprasad6865 2 months ago

    Clear and concise

  • @mickyman753
    @mickyman753 2 months ago

    Just found your channel. Can we have a complete playlist, a type of course, or a one-shot video or series? You explain in depth and I found your videos better than the other tutorials on YouTube

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      Thanks! Check out my playlists; I have various ones for each AWS service I have made videos for. It sounds like that's what you are looking for.

  • @kckc1289
    @kckc1289 2 months ago

    How would you recommend local dev and organization -> uploading to AWS for scripts with multiple files ?

    • @kckc1289
      @kckc1289 2 months ago

      Do you have a Github for this Pytest example?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      Hey, check out my videos on local development for AWS Glue. I covered topics like using interactive sessions, PyCharm, and VS Code with a Docker container with AWS Glue. To upload them, I recommend managing them with IaC using Terraform or CDK.

  • @higiniofuentes2551
    @higiniofuentes2551 2 months ago

    Thank you for this very useful video!

  • @user-gi2kp9hz5u
    @user-gi2kp9hz5u 2 months ago

    How do you use Python in FME?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      You need to use the PythonCaller or PythonCreator transformer.

  • @renyang2320
    @renyang2320 2 months ago

    Your function-based job is quite straightforward. Would you consider organizing your Glue job in a Python class?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      I made the script just for this YouTube video; sure, things could be organized into classes if it makes sense.

  • @DaveThomson
    @DaveThomson 2 months ago

    Do you do any consulting?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      Hey David, I'm actually a full-time AWS D&A consultant for a company that is an AWS partner. Let me know if you want to chat.

    • @DaveThomson
      @DaveThomson 2 months ago

      @@DataEngUncomplicated I would like to chat. I also work full time for a partner.

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      @@DaveThomson Great, feel free to contact me through the email I have posted on my channel.

    • @DaveThomson
      @DaveThomson 2 months ago

      @@DataEngUncomplicated Sent you an email.

  • @Angleito
    @Angleito 2 months ago

    How do you add third-party Python libraries?

    • @DataEngUncomplicated
      @DataEngUncomplicated 2 months ago

      I don't know an elegant way to do this, but you can go into the Docker container and install the Python libraries you need directly that way.

  • @Angleito
    @Angleito 2 months ago

    Does this work with the debugger?

  • @VijayKumar-tr8ki
    @VijayKumar-tr8ki 2 months ago

    Thank you for the great work. I am new to Glue and your videos are a great help. Following this video, I was able to create a derived column, e.g. a new column total_amount equal to price * quantity. In the next step I want to categorize customers based on total_amount: if total_amount < 300 then "small", else if total_amount >= 300 and total_amount < 500 then "medium", else "large". I have defined a function for this by passing the dynamic DataFrame, but when I execute it I just get root. Also, if I run the whole script it completes without any error but does not give the desired result. Could you please confirm whether we can use this conditional logic to get the derived column? Thank you in advance.
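For what it's worth, the bucketing logic this comment describes can be sketched as a plain Python function, with the equivalent Spark when/otherwise expression shown in comments. The column name and thresholds come from the comment; the output column name and everything else are assumptions.

```python
def size_category(total_amount: float) -> str:
    """Bucket a customer by total_amount using the thresholds in the comment."""
    if total_amount < 300:
        return "small"
    if total_amount < 500:
        return "medium"
    return "large"

# On a Spark DataFrame the same logic would typically use when/otherwise
# rather than a row-by-row function (only runs with pyspark installed):
# from pyspark.sql import functions as F
# df = df.withColumn(
#     "customer_size",
#     F.when(F.col("total_amount") < 300, "small")
#      .when(F.col("total_amount") < 500, "medium")
#      .otherwise("large"),
# )
```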

  • @harshadk4264
    @harshadk4264 3 months ago

    Do you use the Factory Design pattern?