# KINTO Technologies Tech Blog
## Background

### Self Introduction

Hello. My name is Li Lin, from the DevOps Team of the KINTO Technologies Global Development Group. Until 2017, I worked in China as an engineer, project manager, and university lecturer, and in 2018 I started working in Japan. I'm a working mother of two, balancing my job while actively reskilling.

### Meet our DevOps team

The Global Development Group's DevOps team started operating this year. The team is international: its members speak Japanese, Chinese, and English as their native languages, and we keep communication smooth by being mindful of each member's language skills. As a new team, each member brings different experience, but we cooperate actively whenever we face challenges, and I believe our teamwork is going well.

### DevOps Team Responsibilities

Currently, there are multiple teams within the Global Development Group. The DevOps Team acts as a common team serving the entire group. Our specific responsibilities are as follows:

| Task | Work Content |
| --- | --- |
| Formulate deployment standards for the Global teams | Define CI/CD and development-environment standards (Git/AWS/Grafana/SonarQube, etc.), establish deployment standards for common components, improve common DevOps practices within the Global teams, collect feedback on these tasks, and run a PDCA cycle. |
| Provide customized support individually | For requests not covered above and not applicable to all groups, assess urgency and necessity, then plan and support implementation. Generally, the DevOps Team provides support while the application team handles implementation. |
| Error resolution support | Help resolve errors that occur during CI/CD processes and environment usage. |
| Improve DevOps and AWS knowledge within the group | Conduct study sessions and handle individual inquiries. |
| Contact point with the Platform Group | Handle inquiries between the Global Development Group and the Platform Group, collect feedback, and establish operational standards for the groups. |
| Standardization of operational tasks | Establish standards for operational tasks; some tasks are outsourced to external vendors. |
| Cost monitoring and policy setting | Optimize environment costs. |
| Inquiry handling | Accept the inquiries mentioned above. |

### Target audience of this article

This article is intended for experienced developers who are considering or have already implemented Flyway. When I first started using Flyway, I did some research online but found very little information that gave an overall picture. This article serves as a proposal for introducing Flyway; I would be honored if you find it helpful.

## Introducing Flyway

### What is Flyway?

Flyway is an open-source database migration tool. It makes it easy to version-control databases across multiple environments. The applicable scenarios for each command are as follows.

### Baseline

Running the Baseline command creates the initial version for Flyway. The default baseline version is "1". In the Community Edition, you can create a baseline only once; it cannot be updated. If some tables already exist in the target database, you must run Baseline first; otherwise, the Migrate command will result in an error.

[Scenario]
Step 1) Before introducing Flyway, set the version of the already-applied SQL scripts to a number smaller than "1".
Step 2) Execute the Baseline command.
Step 3) Execute the Migrate command. As a result, SQL scripts with a version number higher than the baseline version ("1") will be applied.

[Reference] Baselines an existing database
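As a rough sketch of this scenario using the Flyway CLI (the JDBC URL and credentials here are placeholders, not our actual settings):

```bash
# Record the existing schema as the baseline (default version "1")
flyway -url=jdbc:mysql://localhost:3306/app -user=app -password=example baseline

# Apply only the scripts whose version is higher than the baseline
flyway -url=jdbc:mysql://localhost:3306/app -user=app -password=example migrate
```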
### Clean

The Clean command completely clears the target schema. Since this empties the schema, you must implement measures to prevent it from being run against production environments.

[Scenario] If you want to revert to the initial version, follow the steps below.
Step 1) Run the Clean command.
Step 2) Run the Migrate command.

[Reference] Wiping your configured schemas completely clean

### Info

Displays Flyway's migration information. This command also lets you verify that Flyway can connect to the database.

[Scenario] After execution, information like the following is displayed (example):

```
+-----------+---------+-------------+------+--------------+---------+
| Category  | Version | Description | Type | Installed On | State   |
+-----------+---------+-------------+------+--------------+---------+
| Versioned | 00.01   | initial     | SQL  |              | Pending |
| Versioned | 00.02   | initial     | SQL  |              | Pending |
+-----------+---------+-------------+------+--------------+---------+
```

[Reference] Prints the details and status information about all the migrations

### Migrate

Applies new SQL files that have not yet been applied. This is the most commonly used command; run it every time the database needs to be updated to a new version.

[Reference] Migrates the schema to the latest version

### Repair

Removes the execution history of SQL scripts that resulted in errors. However, it cannot undo the execution results themselves: the Repair command only removes the failed entries from the flyway_schema_history table (Flyway's version-control table) in the database. A common pitfall: if a single SQL file contains multiple statements and an error occurs partway through, the statements before the error will have been applied, while those after it will not. In such cases, carefully check which statements were applied and make sure all scripts end up applied correctly.

[Scenario] [Example] Suppose you are applying V01_07, V01_08, and V01_09, and V01_07 and V01_08 succeed but V01_09 fails. You can take the following steps.
Step 1) Fix V01_09.
Step 2) Execute the Repair command.
Step 3) Run the Migrate command again.

[Reference] Repairs the schema history table

### Validate

This command checks whether the SQL scripts in the project have been applied to the database and whether the versions match. You can also use it to verify that the current database matches the version in the cloud.

[Reference] Validates the applied migrations against the available ones
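To tie these commands together, here is a hedged CLI sketch of the Repair scenario described above (connection options omitted; the file name follows Flyway's `V<version>__<description>.sql` convention and is hypothetical):

```bash
# 1) Fix the failed script, e.g. V01_09__add_index.sql
# 2) Remove the failed entry from flyway_schema_history
flyway repair
# 3) Re-run the migration from the fixed script onward
flyway migrate
```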
## Background to Flyway's implementation

If you don't use a tool like Flyway, you need to log in to a bastion server for the database and run update scripts every time you deploy. Most of the Global Development Group's services are composed of microservices, and as the number of environments grew, the traditional method of updating databases via bastion servers became increasingly burdensome and risky, leading to operational challenges. These circumstances led us to consider introducing Flyway.

Initially, we tried introducing a job that could execute commands from a GitHub job via Lambda on AWS. When we actually tried using it, we encountered the following issues:

- If you migrate to AWS without sufficiently verifying the SQL scripts in a local environment, the migration may fail, making recovery difficult.
- If you update the database manually without building a Flyway environment locally, there is a high risk that the structure will diverge from the database on AWS.

With these issues in mind, during the first PDCA cycle we implemented the Flyway system described below.

## Flyway implementation method at the KINTO Technologies Global Development Group

To use Flyway in a Spring Boot application, we implemented the following functions:

1. Flyway is integrated directly into the application.
   - Usage timing: migrations are executed automatically when the application is started locally and when it is deployed to AWS.
   - Purpose: this allows SQL migration scripts to be tested locally and automates the migration process, reducing manual effort.
2. Introducing the Flyway plugin.
   - Usage timing: during local development.
   - Purpose: to run Flyway commands via the plugin if automatic migration cannot be performed locally.
3. GitHub job implementation for Flyway commands.
   - Usage timing: when automatic migration cannot be performed during deployment to AWS, Flyway commands are executed through a GitHub job.
   - Purpose: to enable the execution of Flyway commands without logging in to AWS.

Next, I will introduce the final configuration for each implementation.

### Integrating Flyway into the application

By integrating Flyway into the project, you can achieve the following:

- Databases in each environment are automatically migrated after the application starts.
- Migration SQL scripts are validated in the local environment before migrating the AWS database.

By running the following commands, you can start a MySQL Docker image locally; once the application starts, the latest SQL scripts are automatically migrated:

```bash
docker-compose up -d
./gradlew bootRun
```

### Introducing the Flyway plugin

You can also maintain the local database manually using Flyway commands. By applying the Flyway Gradle plugin, you can execute these commands (a minimal configuration sketch appears at the end of this section).

### Introducing GitHub jobs that can execute Flyway commands

Once deployed on AWS, the database on Aurora is migrated automatically. If this does not happen, you need to run the Flyway commands manually; in that case, they are executed via Lambda on AWS. The configuration diagram is as follows.

The flow from executing the GitHub job to completing the Flyway run is:

1. Upload the execution file from the GitHub job to S3.
2. Extract the necessary parameters from the payload (JSON).
3. Use the AWS CLI to extract the information required for the Flyway execution.
4. Retrieve the zip file containing the SQL scripts from the S3 bucket.
5. Execute Flyway (using a Docker image on Lambda).
6. Place the results in the S3 bucket.

The image below shows the process when executing the command on GitHub. We built this system so that it can be run without logging in to AWS.

This setup allows the following for each environment:

- Databases in each environment are automatically migrated after the application starts.
- Migration SQL scripts are validated in the local environment before migrating the AWS database.
- Tools for executing Flyway commands are provided in each environment.

Using Flyway has brought the following benefits:

- Deployment time was significantly reduced (by more than half).
- Eliminating database discrepancies between environments reduced unnecessary bugs and misunderstandings during development.
- The workload required for managing database versions in each environment was minimized (as long as the version is clearly indicated by the SQL script name).
- Testing and reviewing can prevent incomplete queries from being executed.
- There is no need to log in to a jump server built on AWS to perform operations.

Of course, there are some precautions when using Flyway:

- If there are many developers, agree on a consistent way of using it.
- Troubleshooting and recovery from errors can be time-consuming.
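For reference, here is a minimal configuration sketch for the two integration points above. The property names follow Spring Boot and Flyway Gradle plugin conventions, but the connection values are placeholders rather than our actual settings.

```yaml
# application.yml: Spring Boot runs pending Flyway migrations on startup
spring:
  flyway:
    enabled: true
    locations: classpath:db/migration
  datasource:
    url: jdbc:mysql://localhost:3306/app
    username: app
    password: example
```

```groovy
// build.gradle: enables ./gradlew flywayInfo, flywayMigrate, flywayRepair, etc.
plugins {
    id 'org.flywaydb.flyway' version '9.22.3'
}

flyway {
    url = 'jdbc:mysql://localhost:3306/app'
    user = 'app'
    password = 'example'
}
```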
In theory, the mechanism above would also let us spin up a database while GitHub Actions CI/CD jobs are running, but we have not yet verified this. I am also considering using Flyway to build databases for automated CI/CD testing.

While Flyway has brought many benefits, it has also caused some issues, and I believe there is room for improvement by running a PDCA cycle on our usage standards. By introducing Flyway gradually, depending on the environment and usage scenario, it can be used more safely and efficiently. If you're interested, we encourage you to give it a try.
---
## Introduction

Hello. I'm Nakaguchi, team leader of the iOS team in the Mobile App Development Group. In my day-to-day work, I'm in charge of iOS development for Prism Japan, the easy KINTO application app (the smartphone app and the recently released web version).

Getting straight to the point: on Monday, September 9, 2024, we held the "iOSDC JAPAN 2024 AFTER PARTY," a retrospective event for iOSDC Japan 2024, which took place from Thursday, August 22 to Saturday, August 24, 2024. In this post I'd like to look back on why we held it, what we did to prepare, and how the day itself went. In the "why we held it" part in particular, I'll lay out my own theory, and I'd be delighted if it resonates with many of you.

I'd be happy if this blog were read by:

- people who attended this event,
- people who attended iOSDC,
- people who often attend events, or would like to, and
- people who organize events, or would like to.

Holding this event sent my own motivation through the roof, and I'm writing this tech blog post because I want to share that feeling with as many people as possible.

## Why we held the event

I had been planning this event since around April. If you had asked me why, though, I honestly couldn't have articulated it properly at the time. Since taking on the team leader role last October, I have attended many events, not only iOS-related ones but also events on development productivity, organizational management, engineering management, and anything else that caught my interest. Along the way I noticed two feelings welling up:

- "Attending events really boosts my motivation."
- "The people speaking at and organizing events look really cool."

So if I had to put my April feelings into words, it would be something like, "It looks cool, and I want to run an event myself!" But "because it's cool" cannot justify an event that consumes money, time, people, and many other resources... And so began my days of agonizing over what makes an event worthwhile. Even now, after the event, I don't think I've reached a definitive answer. (I'm simply grateful I was allowed to hold an event in such an ambiguous state.)

When you hold an event while belonging to an organization, something is naturally expected of you. Common justifications include "raising the organization's presence," "spreading awareness of our services," and "supporting recruiting." These are all legitimate, major reasons to hold an event, and if they materialize as results, the event can be called a great success. Personally, though, something about them didn't quite click. At IT industry events, I believe most people attend for their own growth: to gain new knowledge, to expand their network, or simply because attending is fun. Very few attend because they want to know what kind of organization the host is, what services it offers, or because they want to change jobs to that company.

After much agonizing, I reached my own conclusion: "I want to pass motivation on to as many people as possible." As I said above, attending events makes me feel hugely motivated, and I suspect many others feel the same. If even one more person decides to work a little harder tomorrow, I believe that accumulation makes the world a better place. Rising motivation may also produce people who, like me, want to organize an event or get on stage; and seeing them, still others may want to do the same. Good motivation, I believe, is contagious!

So, for now, the significance I attach to holding events is "to pass motivation on to as many people as possible," and that is the spirit in which we held this one (though I hadn't sorted my thoughts out this far when the idea came to me in April). (That said, from an organizational standpoint, "it raises motivation" alone won't get you a green light to run events one after another, so my days of agonizing look set to continue.)

Next, let me introduce the outline of the event.

## Event overview

- Event name: iOSDC JAPAN 2024 AFTER PARTY
- Date and time: Monday, September 9, 2024, from 19:00
- Participants: around 20

We co-hosted it with WealthNavi and TimeTree as a three-company retrospective of iOSDC. The program consisted of three LTs (one slot per company) plus a panel discussion with one panelist from each company.

Now, let me walk through the road to the event.

## The road to the event

Around April, I decided I wanted to hold a mobile-development-related event, but I was at a loss as to how. Our company has a DevRel (technical PR) group that supports event operations, and I figured that with their support, running the event smoothly would not be a problem. On the other hand, I expected that

- attracting participants,
- recruiting speakers, and
- deciding the event theme

would remain difficult even with the DevRel group's support, so I judged that holding a mobile event with our company alone would be too hard.

Findy puts a great deal of energy into running events and presumably has plenty of know-how on attracting participants and recruiting speakers, so I wanted to borrow their strength, and I attended an event they held in May. I also wrote an event participation report on that one, so please have a look as well. That event gave me the chance to start exchanging information with a contact at Findy. After repeated discussions about what kind of event to hold, they introduced us to WealthNavi and TimeTree, and we settled on running an iOSDC retrospective event. My heartfelt thanks go to Findy for their advice and cooperation in running the event, and to WealthNavi and TimeTree for co-hosting it.

Once the three companies had agreed to hold an iOSDC retrospective, everything else, such as the event format, the speakers and panelists, and the date and time, was decided smoothly. With the Connpass event page safely completed, the next step was recruiting participants.

All three companies wanted to emphasize communication with attendees, so we made it an offline-only event. Since we were hosting it in our company's event space, we set a capacity-based target of about 30 registrations. We opened the Connpass page on Thursday, August 8, 2024, and got about 10 registrations within a few days, which felt like a decent start. But I expected the real PR push to come during iOSDC itself, August 22 to 24, and that this was where we would grow the numbers. This year our company had its first-ever sponsor booth, so we promoted the event hard there, and our official X account posted about it several times during iOSDC. The result: the number of registrations gained during iOSDC was... zero.

Honestly, I had underestimated participant recruiting...
Looking back, our way of promoting the event at the sponsor booth clearly needs improvement. Rather than just handing out flyers, we should have designed a path to on-the-spot registration, for example by giving novelty goods to people who registered there and then. That's a lesson for next time...

In fact, checking the event page statistics on Connpass, you can see that between August 22 and 24 there were no new registrations at all, and page views didn't grow either.

Statistics checked on Connpass

After that, in the run-up to September 9, registrations trickled in at the pace shown in the image above, and I was also given time to announce the event when I attended other companies' events. Thanks to that, we had 24 registrations by the day itself. Choosing "an iOSDC retrospective" as the theme did, I feel, have a certain drawing power. So while we didn't reach the initial target of 30 registrations, personally I felt it was more than enough for a first event. All that remained was the day itself.

## The day of the event

Same-day cancellations come with the territory at events like this, for all sorts of reasons, and sadly this event was no exception: several people had to cancel on the day. But by the time the day arrived, I had no room to fret over the headcount going up or down. I was focused on making it an event that our co-hosts WealthNavi and TimeTree, and everyone who attended, would be glad they joined.

Here's a quick look back at the day. We waited nervously for everyone to arrive. Here is the venue with setup complete.

Venue setup safely completed

At 19:00, with WealthNavi, TimeTree, and the participants all assembled, the first LT began: "DX starting from Package.swift" by Muta-san of WealthNavi.

Muta-san's talk

He explained Swift Package Manager from the basics, including things I thought I knew but didn't, which was very instructive. He also introduced WealthNavi's initiatives and where they aim to go next, a rare chance to hear about another company's efforts, and his commentary on the upcoming Swift 6 was educational as well.

On to the second LT: "Exploring trend shifts by running morphological analysis on iOSDC proposals" by Sakaguchi-san of TimeTree.

Sakaguchi-san's talk

I had been curious about this one from the moment I saw the title. I've attended iOSDC several times, and sessions do seem to follow certain trends; it was fascinating to see those trends clearly reflected in the proposals. He had also built the analysis tool himself in Xcode and demonstrated it live on the simulator during the talk, which was fun to watch.

And the third LT: "What we did before exhibiting at iOSDC for the first time" by Hinomori-san of KINTO Technologies.

Hinomori-san's talk

Since this was our company's first sponsor booth, he shared the struggles of the preparation period. I was involved in preparing some of the exhibits myself, and the trial and error with no right answers, such as figuring out what content would land with visitors and how to make it easy to take in, was genuinely hard. What we produced as a sponsor is also covered in detail on this tech blog, so please take a look.

After a break and a toast, we moved on to the panel discussion. The panelists were:

- Cho-san (WealthNavi)
- masaichi-san (TimeTree)
- Hinomori-san (KINTO Technologies)

and I served as moderator.

Panel discussion members

We prepared discussion themes in advance and looked back on iOSDC together. We decided the themes after hearing beforehand from the panelists what they were interested in discussing.

Panel discussion themes

We couldn't cover every theme in the time available, so I tried to read the room and pick the themes that fit the flow of the conversation. The panelists talked about the state of iOS development at each company, their preparations for iOSDC, and how this year differed from previous years.

Our panelists

Finally, we took a group photo with all the participants.

Group photo

## Reflections after the event

As I mentioned at the start, this event went from an idea in April all the way to execution. Right up until it ended, I prepared with constant anxiety: Would it come together? Would participants show up? Would the MC duties go smoothly? Thanks to the cooperation of our co-hosts WealthNavi and TimeTree, our DevRel group, and those who took on staff duties on the day, I personally feel we pulled off a deeply satisfying event. And of course, everyone who attended helped make it a lively one. I want to convey my heartfelt thanks to everyone involved.

What went well: building connections with other companies (WealthNavi, TimeTree, and Findy) through hosting an event was invaluable, and completing my first event as an organizer gave me confidence.

What I want to improve: as noted above, attracting participants is genuinely difficult, and I haven't found a good play yet; next time I want to think it through properly with everyone involved. I also wished more members of our own iOS team had taken part. This time, Hinomori-san, an assistant manager, represented us in the LT and on the panel, but he already speaks and attends events frequently, and my hope had been for members with fewer speaking opportunities to take up the challenge. When I solicited volunteers internally, however, nobody stepped forward, and so Hinomori-san took the stage. Lowering the hurdle to speaking and building a support structure for presentation preparation, starting from the internal recruiting stage, is a major improvement point for me going forward.

## Finally

In October, we've already decided to hold a DroidKaigi 2024 retrospective with the same three companies, WealthNavi and TimeTree, and I'd like to keep holding events like this in the same format from time to time. I said at the start that I want to "pass motivation on to as many people as possible," but the person whose motivation rose the most through this event was, without a doubt, me. If any participants also came away feeling more motivated, then I'd call the event a great success. Through events like this and other activities, I want to keep working to raise the motivation of everyone involved.
---
## Introduction

Hello. I am Nakaguchi from the Mobile App Development Group at KINTO Technologies. I participated in "TechBrew in Tokyo: Facing Technical Debt in Mobile Apps," held on May 23, 2024, and I would like to report on the event.

## The event day

The venue was Findy's newly renovated office. I had heard the rumors, but seeing the spacious, beautiful event space in person was exciting 😀 True to the name "TechBrew," there was plenty of alcohol and snacks, and the atmosphere was very relaxed. However, since I had an LT (lightning talk) of my own later, I refrained from drinking until my presentation was over 👍

### 1st LT: "Steps to evolve Bitkey's mobile app"

They shared the history of Bitkey's mobile app up to the present day. The app was originally built with React Native and evolved by transitioning to native development, adopting SwiftUI, and then adopting TCA. They noted, however, that the SwiftUI adoption is still a work in progress and might have been a mistake: SwiftUI's behavior changes across iOS versions, a challenge I could relate to from my own experience. The comments that stood out to me were, "Everything we chose because we thought it was good was the right choice," and "The decisions we made at the time were probably the right ones." It made me realize how true that is. I also had the opportunity to chat with the presenter, Ara-san, during the social gathering after the LTs. We talked about many things, including Swift on Windows, and I picked up a lot of new information. It was a very enjoyable conversation.

### 2nd LT: "Approaching technical debt in mobile apps as a whole company"

They discussed what technical debt is and how to tackle it. The speaker highlighted the need to distinguish between:

- debt we are aware of, but accept in order to gain returns, and
- debt we are unaware of, or that became debt as circumstances changed.

The former is manageable, but the latter becomes problematic if ignored for too long. To address technical debt, they stressed the importance of negotiating for time to resolve it, even if that means pausing business tasks. They emphasized that technical debt is a shared problem involving not just the development team but all stakeholders, which I also agree with, and I feel such negotiation skills are especially important for engineering managers and team leaders. They also mentioned using Four Keys to visualize the situation, while warning against fixating on numerical targets. I likewise feel that visualizing a team's development capability is hard, and I am careful not to lean too heavily on frameworks like Four Keys.

### 3rd LT: "How to deal with technical debt in Safie Viewer for iOS"

This presentation covered the challenges and strategies of developing an app that has been around for 10 years. The app still uses many technologies from its initial release, and while there is a desire to re-architect, the current system is stable and still supports adding many new features. As a result, they could not justify time-consuming refactoring and had been unable to take action to eliminate the debt. Currently, they address the issues by doing what they can, under the following two main policies.
- Take immediate action where possible:
  - Update to the latest Xcode version as soon as it is released (there is code that cannot be written unless the version is upgraded, and that gap breeds legacy code).
  - Introduce Danger.
- Take a steady approach:
  - The app currently uses MVC/MVP with closure-based asynchronous processing, and re-architecting from this state in one go is risky.
  - Instead, test new features with modern technology.

I thought it made sense that to actually get started, you need to draw up a concrete schedule. I'm often hesitant about major refactorings myself, so I learned the importance of setting a clear schedule and sticking to it.

### 4th LT: "Ensuring safe mobile development with package management"

Like the third LT, this talk focused on an app with a long history of eight years. They discussed how they addressed technical debt by focusing on commonization and separation. A recent challenge they face is excessive commonization: for example, their Channel data has around 100 parameters (borrowing the speaker's term), and they often end up with data that isn't used every time. On the other hand, they warned that excessive separation of responsibilities is also problematic: there were cases where functions were split out even though they were called from only one place, leading to an overdone state. The importance of "thoughtful commonization" and "thoughtful responsibility separation" left a strong impression on me, and I realized I may have been separating things without much consideration. They also explained that managing these concerns with a package manager works well, and introduced some ideas and methods for doing so.

### 5th LT: "Tackling technical debt with GitHub Copilot"

This was my presentation; you can find the slides here. I discussed using GitHub Copilot with Xcode. Compared to VS Code, which supports GitHub Copilot officially, Xcode still has many limitations, and adoption is not growing as quickly. However, I have found that the Chat function can significantly help in addressing technical debt, so I focused on that in my presentation. When I demonstrated the Chat function during the talk, I felt the whole audience's attention sharpen, and I was very happy that everyone seemed to be listening with interest. This was my first time speaking at an event outside our company, but the audience listened warmly, and I completed my presentation without any problems.

## Conclusion

After the LT sessions, there was a social gathering where I exchanged information with many attendees. It was a very stimulating experience, and it motivated me to keep participating in and speaking at external events. I also had a chance to speak with Takahashi-san, the event's organizer, about how great it would be to hold a joint event between our Mobile App Development Group and Findy. I look forward to actively pursuing such collaborations. As a souvenir, I received a bottle of IPA brewed by Findy!
---
## The first commemorative request

This is HOKA from the Manabi-no-Michi-no-Eki (Learning Roadside Station) team. In February 2024, during our monthly company-wide meeting, we announced the launch of our "Manabi-no-Michi-no-Eki (Learning Roadside Station)" initiative. Following the announcement, Nakaguchi-san from the Mobile App Development Group's iOS team reached out with a request: "I'm looking for some advice on how to organize my study sessions."

## A study session consultation for the Mobile App Development Group

This was our first inquiry. We quickly organized a meeting between the four team leaders of the iOS team and three of us from the Learning Roadside Station team. The iOS team has been holding weekly study sessions since June 2023, aiming to raise the team's overall skills. In the first week of each month, they decide together which topics to focus on, and then they spend the second to fourth weeks working on them. Facilitators also take turns. They have run a variety of formats, including casual conversation, LTs (lightning talks), and reading groups, and have even presented on the HIG (Human Interface Guidelines).

My own impression was: "Everything seems so well organized. What could they possibly still have to worry about?" But that is how KINTO Technologies employees often come across. As a result of this consultation, three members of our administrative office were invited to observe their study sessions!

## A peek into the study session next door

### Self-introductions and a casual chat session

So we did our own "peek into the study session next door." The date was March 12, 2024. The iOS team gathered online and in a meeting room for their study session. Since new members were joining that day, the theme was a casual chat session where everyone could introduce themselves.

First came self-introductions: 1 minute × 18 people, about 20 minutes in total. People shared their names, the products they were in charge of, and recent updates. Although each introduction was only a minute, reactions flowed in the Slack chat, making it an efficient way for first-time participants to get a feel for everyone's personalities. The Learning Roadside Station team took the opportunity to introduce ourselves, too.

In the second half, the casual chat began. One member mentioned that Awata-san, who had visited the Muromachi office the day before, said that deploying from Slack was reaching its limits, and suggested creating a mobile app that could integrate without needing to sign in. Another member proposed, "Why not develop it in our spare time? Our Mobile App Development Group has producers and backend developers. If you're interested, we've created a Slack channel, so let's talk about it there." Wow!

Then assistant manager Hinomori-san suggested, "How about developing a Learning Roadside Station app? It would be great to create an internal app. Maybe we could integrate NFTs and KTC tokens." Yajima-san added, "How about giving points for attending study sessions?" Hinomori-san said, "What if those who accumulate points by the end of the year get some kind of reward? It sounds fun, and it could be a good way to work on projects that aren't ready to be released externally." Nakano-san added, "It might be great for internal members to develop for internal use!" A surprisingly positive turn for our Learning Roadside Station!!! I'm so pleased.
"There's likely more we can learn from this activity beyond just writing source code." Comments flew around during the casual chat, providing hints for growth as engineers. This study session is going great, isn't it? The chat continued, and our March study session’s excitement centered around the "try! Swift Tokyo” event which will be April’s study session topic. With their assignments in hand for the next week, the iOS engineers returned to their own paths.
---
## Introduction

Hello! I'm Romie, and I work on the Android side of the my route app in the Mobile App Development Group. At KINTO Technologies (KTC), we can take a wide range of courses through a Udemy Business account! This time I took "Kotlin Coroutines and Flow for Android Development," a course that explains the fundamentals of asynchronous processing on Android, and of Coroutines and Flow, entirely in English.

## Impressions

My honest impressions:

- The English is very even-paced and easy to follow.
- Apart from Android terminology, there are almost no difficult words.

So I highly recommend it to anyone who wants to move beyond beginner level and study asynchronous processing, Coroutines, and Flow properly, or who wants to study English alongside Android fundamentals!

## Items that left an impression

Coroutines and Flow can run off the main thread and let you describe asynchronous processing far more concisely than the traditional approaches. They are also provided as part of Kotlin's official libraries, so there is no need to adopt a separate third-party async framework. Those points alone are a big advantage!

All the examples below are in their most basic form, but I'll record them here partly as a memo for myself.

### Callback

Callbacks are the classic form of asynchronous processing; you branch the handling in onResponse/onFailure.

```kotlin
exampleCallback1()!!.enqueue(object : Callback<Any> {
    override fun onFailure(call: Call<Any>, t: Throwable) {
        println("exampleCallback1 : Error - onFailure")
    }

    override fun onResponse(call: Call<Any>, response: Response<Any>) {
        if (response.isSuccessful) {
            println("exampleCallback1 : Success")
        } else {
            println("exampleCallback1 : Error - isSuccessful is false")
        }
    }
})
```

### RxJava

With RxJava, you branch the handling in onSuccess/onError inside subscribeBy.

```kotlin
exampleRxJava()
    .flatMap { result -> example2() }
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribeBy(
        onSuccess = { println("Success") },
        onError = { println("Error") }
    )
    .addTo(CompositeDisposable())
```

### async/await

You run asynchronous work with async/await and gather the results with awaitAll. This is one of the most commonly used patterns among the traditional styles.

```kotlin
viewModelScope.launch {
    try {
        val resultAsyncAwait = awaitAll(
            async { exampleAsyncAwait1() },
            async { exampleAsyncAwait2() },
            async { exampleAsyncAwait3() }
        )
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}

viewModelScope.launch {
    try {
        val resultAsyncAwait = exampleAsyncAwait()
            .map { result -> async { multiExampleAsyncAwait() } }
            .awaitAll()
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}
```

### withTimeout

withTimeout handles timeouts; it throws an exception when the timeout is reached.

```kotlin
viewModelScope.launch {
    try {
        withTimeout(1000L) {
            exampleWithTimeout()
        }
        println("Success")
    } catch (timeoutCancellationException: TimeoutCancellationException) {
        println("Error due to timeout")
    } catch (exception: Exception) {
        println("Error")
    }
}
```

### withTimeoutOrNull

withTimeoutOrNull also handles timeouts, but unlike withTimeout, it returns null on timeout instead of throwing.

```kotlin
viewModelScope.launch {
    try {
        val resultWithTimeoutOrNull = withTimeoutOrNull(timeout) {
            exampleWithTimeoutOrNull()
        }
        if (resultWithTimeoutOrNull != null) {
            println("Success")
        } else {
            println("Error due to timeout")
        }
    } catch (exception: Exception) {
        println("Error")
    }
}
```

### Database operations with Room and Coroutines

Combining Room with Coroutines, this example checks whether the database is empty and inserts values if data is available. Fetching the current database values can throw an exception, so it is wrapped in try/catch. Today this pattern, together with Flow, is probably used in a great many places in Android asynchronous code.

```kotlin
viewModelScope.launch {
    val resultDatabaseRoom = databaseRoom.exac()
    if (resultDatabaseRoom.isEmpty()) {
        println("The database is empty")
    } else {
        println("The database has values")
    }
    try {
        val examDataList = getValue()
        for (resultExam in examDataList) {
            database.insert(resultExam)
        }
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}
```

### Flow

A basic Flow. onStart emits the initial value, and onCompletion logs that processing has finished.

```kotlin
sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsLiveData: LiveData<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onStart { emit(UiState.Loading) }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .asLiveData()
```
### SharedFlow/StateFlow

SharedFlow and StateFlow are kinds of Flow; stateIn converts a Flow into a StateFlow. The difference between Flow and SharedFlow is that a plain Flow does not retain emitted values, while a SharedFlow does. StateFlow, unlike the other two, holds an initial value and lets you read its current value directly. SharedFlow retains values like StateFlow does, and multiple collectors can receive them.

```kotlin
sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsFlow: StateFlow<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .stateIn(
        scope = viewModelScope,
        initialValue = UiState.Loading,
        started = SharingStarted.WhileSubscribed(stopTimeoutMillis = 5000)
    )
```

## Summary

Much of the content is fundamental, but because the explanations are in English, it took me a while to get through one pass. I think a second pass, after getting a fuller picture of asynchronous processing, would deepen my understanding, though the second pass would probably turn into English study... Thank you for reading to the end!
---
## Introduction

Hello, my name is Rina, and I'm part of the development and operations team of our product Mobility Market by KINTO at KINTO Technologies. I mainly work as a frontend engineer using Next.js. Recently, I've been into painting Gundam models and playing Splatoon 🎮

At KINTO Technologies, we can purchase work-related books at the company's expense. These books are managed by the CIO Office and are available for employees to borrow freely. In this post, I'd like to share how we made managing these purchased books easier!

## The previous book management method

Previously, we used Confluence and updated the lending status manually. The management flow was as follows:

1. The administrator adds purchased books to the book lending list in Confluence.
2. Anyone wishing to borrow a book selects it from the lending list in Confluence and contacts the administrator via Slack to request a loan.
3. The administrator updates the lending status in Confluence based on the Slack messages.

With this approach:

- Anyone wishing to borrow or return a book had to contact the administrator via Slack.
- The administrator had to update the lending status manually each time.

This was a hassle. To simplify things, we completely overhauled the way we manage our books!

## The new book management method

The new method uses JIRA workflows and a Kanban-style board, letting everyone see the lending status without going through the administrator.

Kanban-style board

The management flow is now:

1. The administrator registers purchased books as tickets in the board's library.
2. Anyone wishing to borrow a book selects it from the library and changes its status to "Borrowing."

And that's it! With all purchased books registered on the Kanban board, the administrator can see the lending status at a glance with no manual updates, while borrowers update the status themselves, without contacting the administrator or using Slack.

## A JIRA workflow to simplify tasks

To create this board, we set up the following workflow:

Workflow

We created three statuses, "In Library," "Borrowing," and "Discarded/Lost," and automated the transitions between them to minimize manual input. The settings for each transition are as follows.

Check out (changing a book's status from "In Library" to "Borrowing"):

- Automatically inserts the current date as the lending date.
- Automatically assigns the borrower as the assignee of the JIRA ticket.
- Counts the number of times the book has been borrowed.

Check in (changing the status from "Borrowing" to "In Library"):

- Automatically clears the lending and expected return dates.
- Automatically removes the borrower as the assignee.

## A little trick to make things even easier

### Get an overview of book management across all offices

We added icons for each location, making it easy to see which books are available in each office at a glance. You can also filter the board by office type (a sketch of such a filter follows below). KINTO Technologies has two offices in Tokyo and one each in Nagoya and Osaka. Previously, each office managed its books separately, but now all books are managed centrally on a single board.
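For illustration, a per-office view like this can be expressed in JQL. This is only a hedged sketch: the project key `LIB` and the `Office` custom field are hypothetical names, not our actual configuration.

```
project = LIB AND status = "Borrowing" AND "Office" = "Nagoya" ORDER BY updated DESC
```

A saved filter along these lines can back a board quick filter, so each office can view only its own books.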
### Receive Slack notifications when the status changes

We also use JIRA's notification feature to inform the administrator of status changes via Slack. This Slack integration has made it easier to track newly purchased books and see who changed a status.

## Improvement results

Revising the book management method brought the following benefits.

For administrators:

- No more manually updating the management status of books.
- The lending status, and who has borrowed which book, is visible at a glance.
- Books previously managed separately by each office are now managed centrally.

For borrowers:

- No more contacting the administrator via Slack to borrow or return books.
- Simply changing the JIRA ticket status notifies the administrator (no text input required!).

## Conclusion

In this article, we shared how we simplified our book management. By trimming away some of the hassle, we hope to make both administrators and users happier ✨
---
## Self Introduction

I am Morino, team leader of the CIO Office Security Team at KINTO Technologies. My hobby is supporting Omiya Ardija, the soccer team from my childhood hometown of Omiya, now part of Saitama City in Saitama Prefecture. In this article, I'll introduce our vulnerability diagnostics efforts alongside Nakatsuji-san, who is passionate about heavy metal and is the main person in charge of our vulnerability diagnostics.

## What is a vulnerability?

Let's take a moment to consider: what exactly is a vulnerability? A vulnerability is a software bug (defect or flaw) that compromises the CIA of information security. CIA stands for:

- Confidentiality
- Integrity
- Availability

Confidentiality ensures that only authorized individuals have access to specific information. For example, in an app used to view payslips, confidentiality is upheld if only HR personnel and I (as authorized individuals) can access my payslip. If a software bug allows others to view it, confidentiality is compromised.

- Confidentiality is maintained when only authorized individuals can view the payslip.
- Confidentiality is compromised when unauthorized individuals can view the payslip.

Integrity ensures that information remains complete, accurate, and untampered with. Using the same payslip example, integrity is maintained if only HR personnel can delete or modify the contents of my payslip; if others can delete or alter it, integrity is compromised.

- Integrity is maintained when only authorized individuals can delete or edit the payslip.
- Integrity is compromised when unauthorized individuals can delete or edit the payslip.

Availability ensures that information is accessible whenever it's needed. For example, availability is maintained if HR personnel and I can access my payslip whenever necessary; if we cannot access it when needed, availability is compromised.

- Availability is maintained when the payslip is always accessible.
- Availability is compromised when the payslip is not accessible.

## About our vulnerability diagnostics efforts

The goal of vulnerability diagnostics is to identify bugs that compromise the CIA of information security. At our company, we conduct the following types of vulnerability diagnostics:

- Web application diagnostics
- Platform diagnostics
- Smartphone application diagnostics

## Web Application Diagnostics

Web application diagnostics can be broadly categorized into static and dynamic diagnostics. Static diagnostics identifies insecure code from the source code without running the application; dynamic diagnostics evaluates the security of a running web application. Static diagnostics is also known as SAST (Static Application Security Testing), and dynamic diagnostics as DAST (Dynamic Application Security Testing).

Both types can be performed automatically or manually. In automated diagnostics, tools check the source code or web application automatically based on predefined settings; in manual diagnostics, humans inspect them for vulnerabilities by hand. Our security team primarily focuses on dynamic diagnostics, and I will explain both the automated and manual methods we use there.

### Automated diagnostics

At our company, we use an automated diagnostic tool called AppScan. For example, when diagnosing whether a web application has SQL injection vulnerabilities, we enter attack strings designed to trigger SQL injection into the input fields and execute them.
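As a hedged illustration of what such an attack string does (a textbook probe, not one of our actual test payloads): if an application concatenates user input directly into a query, a value like `' OR '1'='1` changes the query's meaning.

```sql
-- Intended query, with the user-supplied $input concatenated in:
SELECT * FROM users WHERE name = '$input';

-- With the probe ' OR '1'='1 injected, the condition is always true:
SELECT * FROM users WHERE name = '' OR '1'='1';
```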
Manually checking every input field with a variety of attack strings is time-consuming. If the web application's session expires during diagnostics, we have to log in again, and some functions require specific sequences of screen transitions, which can be tedious. Automated diagnostic tools like AppScan handle these tasks efficiently, making them incredibly useful.

### Manual diagnostics

For manual diagnostics, we use a tool called Burp Suite. You might wonder why we conduct manual diagnostics when we have automated tools. The security community OWASP (Open Web Application Security Project) publishes the OWASP Top 10, a ranking of the most critical security risks. Injection, which ranks third in the OWASP Top 10, is something automated tools are good at detecting: they can input a far more comprehensive set of attack strings into fields than a human could. But what about the top issue on the list, broken access control? That issue is similar to the earlier example about ensuring the confidentiality of a payslip app. Unfortunately, automated tools struggle to understand the specifics of a web application's design and to judge whether its behavior is appropriate. Diagnosing such vulnerabilities requires a manual approach.

## Platform Diagnostics

Platform diagnostics evaluates network devices such as firewalls and load balancers, as well as the configurations of the servers hosting web applications, including vulnerabilities in server operating systems and middleware. For platform diagnostics, we use a tool called nmap. During these diagnostics, we check for:

- Unnecessary open ports
- Use of vulnerable software
- Configuration issues
- Protocol-specific vulnerabilities

Reference: Guidelines for Introducing Vulnerability Diagnostics in Government Information Systems, p. 7
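For reference, a scan like the following is a typical starting point for this kind of check. The flags are standard nmap options and the host name is a placeholder; an actual engagement would tune the scan to the target and to the agreed rules of engagement.

```bash
# Version detection (-sV), default scripts (-sC), all TCP ports (-p-)
nmap -sV -sC -p- target.example.com
```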
## Smartphone Application Diagnostics

Smartphone app diagnostics typically involves two parts: diagnostics of the app itself and diagnostics of the web API. For the web API, we conduct vulnerability diagnostics similar to those for web applications. For the app itself, we perform static diagnostics based on OWASP's Mobile Application Security Testing Guide (MASTG). Going forward, we are considering MobSF, which supports both dynamic and static diagnostics of apps.

## Recommended books, resources, and websites for learning more about vulnerability diagnostics

If you've read this far, you might be interested in learning more about vulnerability diagnostics. Here are some helpful books, documents, and websites for further study.

### Books

"How to Create Secure Web Applications Systematically, 2nd Edition: Understanding the principles and implementing countermeasures for vulnerabilities." Commonly known as the "Tokumaru book," it is considered a foundational text for those learning vulnerability diagnostics. It's so thick it could double as a blunt instrument, so if you want to carry it around, I recommend the e-book version.

### Documents

"How to Create a Secure Website" by IPA. As the title suggests, this document explains how to build a secure website. It has fewer pages than the Tokumaru book above, so I recommend it for newcomers to vulnerability diagnostics.

### Websites

WebSecurityAcademy is a vulnerability learning site run by PortSwigger, the developer of the Burp Suite diagnostic tool mentioned above. It consists of textbook material on vulnerabilities plus hacking exercises, and you can learn by actually completing the exercises in your browser.

## Conclusion

In this article, we introduced the security team's vulnerability diagnostics efforts. Recently, implementing web APIs with GraphQL rather than REST has become popular. The IT world is a place where technologies come and go quickly, so we will keep collecting information and improving our operations daily, so that we can effectively diagnose vulnerabilities in applications built with new technologies.
---
I'm Ryomm, and I develop my route (iOS) at KINTO Technologies (KTC). This year, KTC is sponsoring iOSDC Japan 2024, held over the three days of August 22 to 24, 2024, for the very first time!

▼ Also recommended ▼
✨ KINTO Technologies is a Gold Sponsor of iOSDC Japan 2024 ✨

And we're even running a booth ✨ Many people, including the DevRel group, the Creative Office, and the Mobile App Development Group, have been involved in the preparations, and I think it has turned into a booth you can really enjoy. Please come visit the KTC booth! And we'd be delighted if you remembered the name KTC (= KINTO Technologies)!

We put a lot of care into everything we produced for this sponsorship. In this post, I'll introduce the many things we made!

## Kumobii paper clips

These went into the novelty box! They were Chimrin-san's idea: practical and stylish! Kumobii is KINTO's official mascot character.
https://corp.kinto-jp.com/mascot/profile/
You can clip them onto your favorite page of the pamphlet or use them as bookmarks in technical books. They're made of paper, but they're quite sturdy and easy to use! And when you open the backing paper... a token appears!

## Pamphlet ad

KTC also has an ad in the pamphlet included in the novelty box! We aimed for a finish that conveys the mood of KTC, which supports Toyota's mobility services on the technology side.

## Sticker and sticker-sheet set

This is a novelty we hand to everyone who stops by the booth! This one was my (Ryomm's) idea 🙌 At events like this, you collect piles of stickers at every booth. What do you all do with them? At try! Swift Tokyo 2024, I saw someone collaging the stickers they'd received onto their name badge and thought it was brilliant, so I copied it. At iOSDC, the badge is a folded sheet in a clear case, so you can't collage onto it directly, which is why we prepared a backing sheet you can collage on! While we were at it, we gave it an iPhone-style design... sized at roughly a 15 Pro... small enough to fit into the name badge case. We'd love for you to slip it into your badge case as a memento of the event. We're also handing out stickers styled after the icons of the apps KTC provides, so please stick those on the sheet too.

## Multi-card tool

This is commemorative novelty no. 1 for clearing the booth challenge! We held an ideathon in the iOS team, and K.Kane-san's idea was adopted.

Stowed

If you're an iOS engineer, surely you've held a ruler up to the screen while implementing a View to check it matches the design... maybe... or maybe not... Either way, with this business-card-sized tool you're covered: you can measure lengths and angles anytime, anywhere.

## Tote bag

A tote bag printed with the adorable Kumobii. This is commemorative novelty no. 2 for clearing the booth challenge. It's an either-or choice with the multi-card tool, so please come by the booth as many times as you like. The idea came from uka-san: you receive so many things at iOSDC that a bag to hold them all would be handy! The material is quite sturdy; I recommend it!

## Booth leaflet

At the booth, we're also handing out a leaflet introducing KTC, in the hope that you'll get to know the products KTC puts out!

## Booth challenge

For the booth challenge, we've prepared a game called "Find That Code!" (コードみいつけた!): find the part of the code that performs a given task. Each KTC product team prepared its own problems, and the problems rotate over time, so don't miss them! We put care into the problems themselves, of course, but also into the small details that unify the booth's atmosphere: we borrowed the wooden frame that displays our posters and blackened it with DIY stickers, adjusted the background so the two-column code layout is easy to read, and designed the problem sheets to match the booth. We also took this opportunity to make a roll-up banner. Come try the booth challenge, wrapped in KINTO blue!

## Finally

The ones who took this mountain of requests and produced the coolest possible designs are Sugimoto Aya-san and Awano-san of the Creative Office! During novelty production, they brought handmade prototypes and communicated in ways that made the finished images concrete. Thanks to them, we're ready to welcome everyone who visits the booth with confidence. The curtain rises on August 22! We'll be waiting at our sponsor booth in Rohm Square, so come say hello!
---
Hello, I am _awache ( @_awache ), from DBRE at KINTO Technologies (KTC). In this article, I'll give a comprehensive overview of how I implemented a safe password rotation mechanism for database users, primarily those registered in Aurora MySQL, the challenges I encountered, and the peripheral development that arose along the way. Since this will be a lengthy post, here's a brief summary first.

## Summary

### Background

Our company has a policy requiring database users to rotate their passwords at regular intervals.

### Solutions considered

- MySQL Dual Password: set primary and secondary passwords using the Dual Password function available in MySQL 8.0.14 and later.
- AWS Secrets Manager rotation function: automate password updates and strengthen security using Secrets Manager.

### Solution adopted

The rotation function of AWS Secrets Manager was adopted for its easy setup and comprehensive coverage.

### Project kickoff

At the start of the project, we created an inception deck and clarified key boundaries around cost, security, and resources.

### What was developed in this project

#### Lambda functions

After thorough research, we developed multiple Lambda functions, because the AWS-provided rotation mechanism did not fully meet KTC's requirements.

- Lambda function for the single-user strategy
  - Purpose: rotate passwords for a single user.
  - Settings: managed by Secrets Manager; executes at the designated rotation times in Secrets Manager to update passwords.
- Lambda function for the alternating-users rotation strategy
  - Purpose: update passwords for two users alternately to enhance availability.
  - Settings: managed by Secrets Manager. In the initial rotation, a second user (a clone) is created; passwords are switched in subsequent rotations.
- Lambda function for secret rotation notifications
  - Purpose: report the results of secret rotations.
  - Trigger: CloudTrail events for RotationStarted, RotationSucceeded, and RotationFailed.
  - Function: stores the rotation results in DynamoDB and sends notifications to Slack, posting a follow-up message with a timestamp to the Slack thread.
- Lambda function for managing DynamoDB storage of rotation results
  - Purpose: store rotation results in DynamoDB as evidence for submission to the security team.
  - Function: executes in response to CloudTrail events, saves the rotation results to DynamoDB, and sends SLI notifications based on the stored data.
- Lambda function for SLI notifications
  - Purpose: monitor rotation status and send SLI notifications.
  - Function: retrieves information from DynamoDB to track rotation progress and notifies Slack as needed.
- Lambda function for rotation schedule management
  - Purpose: determine the rotation execution time for a DBClusterID.
  - Function: generates a new schedule based on existing secret rotation settings, saves it to DynamoDB, and sets the rotation window and duration.
- Lambda function for applying rotation settings
  - Purpose: apply the scheduled rotation settings to Secrets Manager.
  - Function: configures secret rotation at the designated times using information from DynamoDB.

#### A tool for registering secret rotations

We developed an additional tool to facilitate local registration of secret rotations.

- Tool for setting secret rotation schedules
  - Purpose: set secret rotation schedules per database user.
  - Function: applies the secret rotation settings based on the data saved in DynamoDB for the specified DBClusterID and DBUser.
### Final architecture overview

We initially believed it could be done much more simply, but it turned out more complex than expected...

![The whole image](/assets/blog/authors/_awache/20240812/secrets_rotation_overview_en.png =750x)

### Results

- Automated the entire secret rotation process, reducing security and management effort.
- Built a comprehensive architecture that meets governance requirements.
- Leveraged secret rotation to enhance database safety and efficiency, with ongoing improvement efforts.

Now, let's get into the main story.

## Introduction

KTC has a policy requiring database users to rotate their passwords at regular intervals. However, rotating passwords is not straightforward. To change a database user's password, the system must first be stopped; then the password in the database is updated, system configuration files are adjusted, and finally system operation must be verified. In other words, just to change a database user's password, we have to perform a maintenance operation that delivers no direct value, including stopping the system. Doing this for every service at very short intervals would be highly inconvenient. This article explains, with concrete examples, how we addressed this challenge.

## Solutions considered

We considered two main solutions:

1. Use MySQL's Dual Password function.
2. Use the rotation function of Secrets Manager.

### MySQL Dual Password

The Dual Password function is available in MySQL starting from version 8.0.14. It allows setting both a primary and a secondary password, enabling password changes without stopping the system or its applications. The basic steps are:

1. Set a new primary password while keeping the current password as the secondary one:

```sql
ALTER USER 'user'@'host' IDENTIFIED BY 'new_password' RETAIN CURRENT PASSWORD;
```

2. Update all applications to connect with the new password.
3. Delete the secondary password:

```sql
ALTER USER 'user'@'host' DISCARD OLD PASSWORD;
```

### Rotation function of Secrets Manager

AWS Secrets Manager supports periodic automatic updates of secrets. Activating secret rotation not only reduces the effort of managing passwords manually but also strengthens security. To activate it, you only need to configure the rotation policy in Secrets Manager and assign a Lambda function to handle the rotation.

![Rotation setting screen](/assets/blog/authors/_awache/20240812/rotation_setting_en.png =750x)

When configuring the Lambda rotation function, the console offers two options:

- Create a rotation function: AWS deploys the provided code automatically, so you can use it immediately without writing a custom Lambda function.
- Use a rotation function from your account: you can either create a custom Lambda function or select one created earlier under "Create a rotation function" if you wish to reuse it.

### Rotation strategy

Single user: rotates the password of a single user. The database connection is maintained, allowing authentication information to be updated, and an appropriate retry strategy reduces the risk of access denial. After rotation, new connections require the updated credentials (password).

Alternate user: initially, I found this strategy hard to grasp even after reading the manual, but after careful consideration, I'd articulate it as follows. This method alternates password updates between two users at each rotation, with the credentials (a username/password pair) updated in the secret.
After a second user (a clone) is created during the initial rotation, the passwords are switched in subsequent rotations. This approach suits applications that require high database availability, since valid credentials remain available even during a rotation. The clone user has the same access rights as the original user, so it is important to keep both users' permissions synchronized when updating access rights. The images below illustrate this.

Changes before and after rotation:

![Before/after rotation](/assets/blog/authors/_awache/20240812/rotation_exec_en.png =750x)

Though it may be a bit hard to see, '_clone' is appended to the username during password rotation. In the first rotation, a new user with the same privileges as the existing user is created on the database side; from the second rotation onward, that user is reused and its password keeps being updated.

![Alternate user](/assets/blog/authors/_awache/20240812/multi_user_rotation_en.png =750x)

## The solution adopted

We decided to use the rotation function of Secrets Manager, for the following reasons.

Ease of setup:

- MySQL Dual Password: even after preparing a password-change script, the updated password must still be applied to the application.
- Secrets Manager rotation: the product side needs no code changes, as long as the service consistently retrieves connection information from Secrets Manager.

Coverage:

- MySQL Dual Password: supported only in MySQL 8.0.14 and later (Aurora 3.0 or later).
- Secrets Manager rotation: supports all the RDBMS KTC uses (Amazon Aurora and Redshift).

Support beyond database passwords:

- Secrets Manager can also manage API keys and other credentials used in the product.

## Toward the project kickoff

Before starting the project, we clarified our boundaries for cost, security, and resources to determine what should and shouldn't be done, and we created an inception deck. The discussion is outlined below.

### Breakdown of responsibilities

| Topic | Product team | DBRE team |
| --- | --- | --- |
| Cost | Bears the cost of Secrets Manager for storing database passwords. | Bears the cost of the secret rotation mechanism. |
| Security | Products using this mechanism must always retrieve database connection information from Secrets Manager. After a rotation, connection information must be refreshed by redeploying the application and other components before the next rotation occurs. | Ensure rotations complete within the company's defined governance limits; provide records of secret rotations to the security team as required; never store passwords in plain text, to maintain traceability; keep the rotation mechanism itself sufficiently secure. |
| Resources | Ensure all database users are managed by Secrets Manager. | Ensure the secret rotation resources are implemented with the minimum necessary configuration. |

### What needed to be done

- Execute secret rotation within the company's defined governance limits.
- Detect the start, completion, success, or failure of a secret rotation and notify the relevant product teams.
- Ensure recovery from a failed secret rotation without affecting the product.
- Align rotation timing with the schedule set for users registered in the same DB cluster.
- Monitor compliance with the company's governance standards.
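For context, a database secret handled by this flow stores its connection information as JSON. The field names below follow AWS's documented convention for RDS/Aurora secrets; all values are placeholders, not our actual configuration:

```json
{
  "engine": "mysql",
  "host": "example-cluster.cluster-abc123.ap-northeast-1.rds.amazonaws.com",
  "username": "app_user",
  "password": "generated-by-rotation",
  "dbname": "app",
  "port": 3306
}
```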
Inception deck (an excerpt)

Why are we here
- To develop and implement a system that complies with the company's security policy and automatically rotates database passwords at regular intervals.
- To strengthen security, reduce management effort, and ensure compliance through automation.
- Led by the DBRE team, to achieve safer and more efficient password management by leveraging AWS's rotation strategies.

Elevator pitch
Our goal is to reduce the risk of security breaches and ensure compliance. We offer a service called Secret Rotation, designed for product teams and the security group, to manage database passwords. It automatically strengthens security and reduces management effort, and unlike MySQL's Dual Password feature, it is compatible with every RDBMS option on AWS. Through AWS services, we utilize the latest cloud technologies to provide flexible and scalable security measures that meet enterprise data protection standards.

Proof of Concept (PoC)
To execute the PoC, we prepared the necessary resources in our testing environment, such as a DB Cluster for our own verification. We found that implementing the rotation mechanism through the console was straightforward, which led us to anticipate a rapid deployment of the service with high expectations. At that time, however, I had no way of knowing that trouble was just around the corner...

Architecture
Providing secret rotation alone is not enough without a notification mechanism for users, so I'll introduce an architecture that includes this essential feature.

Secret Rotation Overview

![The whole architecture](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture_en.png =750x)

Secret rotation is managed through secrets registered in Secrets Manager. Here's an example of a monthly update for clarity. In this case, the same password can be used for up to two months due to the monthly rotation schedule. During this period, you can comply with the company's rotation rules with minimal effort, while still aligning with any deployment timing needed for product releases.

Rotation results stored in DynamoDB

During Secret Rotation, a status event is written to CloudTrail at the following points:

- Process start: RotationStarted
- Process failure: RotationFailed
- Process end: RotationSucceeded

See the log entries for rotation, as additional details are available there. We configured a CloudWatch Event so that the above events trigger the Lambda function for notification. Below are some of the Terraform code snippets used:

```hcl
cloudwatch_event_name        = "${var.environment}-${var.sid}-cloudwatch-event"
cloudwatch_event_description = "Secrets Manager Secrets Rotation. (For ${var.environment})"
event_pattern = jsonencode({
  "source" : ["aws.secretsmanager"],
  "$or" : [{
    "detail-type" : ["AWS API Call via CloudTrail"]
    }, {
    "detail-type" : ["AWS Service Event via CloudTrail"]
  }],
  "detail" : {
    "eventSource" : ["secretsmanager.amazonaws.com"],
    "eventName" : [
      "RotationStarted",
      "RotationFailed",
      "RotationSucceeded",
      "TestRotationStarted",
      "TestRotationSucceeded",
      "TestRotationFailed"
    ]
  }
})
```

The stored rotation results can be used as evidence for submission to the security team. The architecture reflecting the components discussed so far is as follows:

![Architecture only for Secret Rotation](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture2_en.png =750x)

AWS resources needed to provide these functions

- Lambda function for applying the alternate user strategy (separate Lambda functions are required for MySQL and Redshift).
- The alternate-user Lambda function set on Secrets Manager was developed in-house to meet company rules for infrastructure compliance. We encountered several elements that the automatically generated Lambda functions could not address, such as Lambda function settings and IAM configurations.
- Lambda function for applying the single user strategy (again, separate Lambdas are needed for MySQL and Redshift). The single-user Lambda function is also set on Secrets Manager; an administrator user's password cannot be rotated with the alternate user strategy.
- Lambda function for Secret Rotation notifications. A mechanism that announces a rotation performed by Secret Rotation must be prepared by ourselves. Since CloudTrail stores the status and results, we can use them as a trigger to notify Slack. Be careful: the Lambda is executed individually for each event trigger.
- DynamoDB table for storing rotation results. The results of each rotation are stored in DynamoDB. The Slack thread timestamp is also stored, so that it is clear which notification each result relates to.

Why we chose to manage the Lambda function for secret rotation ourselves

As a prerequisite, we use the AWS-provided Lambda code. Since AWS can deploy this code automatically, it can be used immediately without creating individual Lambda functions. However, we instead commit the code set to our repository and deploy it with Terraform. The main reasons are as follows:

- Multiple services exist within KTC's AWS accounts. When several services live in the same AWS account, the IAM privileges of an automatically created function become too broad.
- Services are also provided across regions. Since a Lambda function cannot be executed cross-region, the same code must be deployed to each region, which we do with Terraform.
- We have a large number of database users that require Secret Rotation settings: just under 200 database clusters and just under 1,000 database users. The workload would be overwhelming if we manually built the setup for each secret.
- Applying company rules requires setting tags in addition to IAM. Automatic, individual creation would require setting up the tags afterwards.
- The AWS-provided code is updated periodically. Since the code is provided by AWS, this inevitably happens, and an unexpected update could by chance cause trouble.

I have listed several reasons, but in a nutshell, managing the code ourselves was simply more convenient given our in-company rules.

How we managed the Lambda functions for Secret Rotation

This was genuinely hard work. At the beginning, we thought it would go smoothly, since AWS provides samples of Lambda code. But after deploying code based on those samples, we saw many kinds of errors. While we had some success during our own verification, we faced significant challenges when errors occurred in specific database clusters. Meanwhile, we discovered that the code automatically generated from the console was error-free and stable, which highlighted the need to use it effectively. There are several possible approaches; let me share the ones we tried.

1. Deploy from the sample code. We can see the code itself at the link mentioned above. However, it is hard to match all the necessary modules, including their versions. Besides, this Lambda code is frequently updated, and we would have to keep following it. We gave up on this approach as it was too much work, and realized we would be better off producing the code in-house by another method, as long as we needed to control it.
2. Download the Lambda code after automatically generating the Secret Rotation function from the console. This method generates the code automatically each time and downloads it locally to apply to our Lambda. It is not too difficult. However, depending on the timing of automatic code generation, the downloaded code may differ from the existing, working code. This approach would have worked, but we found it burdensome to deploy every time the code needed updating.

3. Check how deployment works from the CloudFormation template used behind the scenes when the Secret Rotation function is automatically generated from the console. When the function is generated from the console, AWS CloudFormation operates in the background. By examining the template at this stage, we can obtain the S3 path of the code automatically generated by AWS.

We adopted the third method, as it was the most efficient: we can obtain the Zip file directly from S3, eliminating the need to generate the Secret Rotation code each time. The actual script to download it from S3 is as follows:

```bash
#!/bin/bash
set -eu -o pipefail

# Navigate to the script directory
cd "$(dirname "$0")"

source secrets_rotation.conf

# Function to download and extract the Lambda function from S3
download_and_extract_lambda_function() {
  local s3_path="$1"
  local target_dir="../lambda-code/$2"
  local dist_dir="${target_dir}/dist"

  echo "Downloading ${s3_path} to ${target_dir}/lambda_function.zip..."

  # Remove existing lambda_function.zip and dist directory
  rm -f "${target_dir}/lambda_function.zip"
  rm -rf "${dist_dir}"

  if ! aws s3 cp "${s3_path}" "${target_dir}/lambda_function.zip"; then
    echo "Error: Failed to download file from S3."
    exit 1
  fi
  echo "Download complete."

  echo "Extracting lambda_function.zip to ${dist_dir}..."
  mkdir -p "${dist_dir}"
  unzip -o "${target_dir}/lambda_function.zip" -d "${dist_dir}"
  cp -p "${target_dir}/lambda_function.zip" "${dist_dir}/lambda_function.zip"
  echo "Extraction complete."
}

# Create directories if they don't exist
mkdir -p ../lambda-code/mysql-single-user
mkdir -p ../lambda-code/mysql-multi-user
mkdir -p ../lambda-code/redshift-single-user
mkdir -p ../lambda-code/redshift-multi-user

# Download and extract Lambda functions
download_and_extract_lambda_function "${MYSQL_SINGLE_USER_S3_PATH}" "mysql-single-user"
download_and_extract_lambda_function "${MYSQL_MULTI_USER_S3_PATH}" "mysql-multi-user"
download_and_extract_lambda_function "${REDSHIFT_SINGLE_USER_S3_PATH}" "redshift-single-user"
download_and_extract_lambda_function "${REDSHIFT_MULTI_USER_S3_PATH}" "redshift-multi-user"

echo "Build complete."
```

By running this script at deployment time, the code can be updated. Conversely, the existing code can continue to be used as long as the script is not run.

Lambda function and DynamoDB for notifying Secret Rotation results

Notification of Secret Rotation results is triggered by a PUT to CloudTrail. We considered modifying the Lambda function for Secret Rotation itself to simplify things, but that would have undermined our effort to fully utilize the code provided by AWS. Before starting development, I initially thought all we needed was a PUT trigger for notifications. Things were not that easy. Let's look at the whole picture again.
![The whole architecture](/assets/blog/authors/_awache/20240812/secrets_rotation_archtecture_en.png =750x)

The notification process creates a Slack thread when a rotation starts and appends a follow-up message to that thread when the rotation completes.

![Slack Notification](/assets/blog/authors/_awache/20240812/slack_notification.png =750x)

The events we use this time are as follows:

- Event at the start of processing: a PUT to CloudTrail with RotationStarted
- Events at the end of processing: a PUT to CloudTrail with RotationSucceeded when processing succeeds, or RotationFailed when it fails

When RotationStarted arrives, the Slack timestamp of the thread's first message is stored in DynamoDB, and later messages are appended to the thread using it. Given this, we had to decide which unit makes a DynamoDB item unique. We chose to combine the SecretID from Secrets Manager with the scheduled date of the next rotation. The main DynamoDB columns are as follows (in reality, more information is stored):

- SecretID: partition key
- NextRotationDate: sort key; the schedule of the next rotation, obtainable with describe
- SlackTS: the timestamp of the first Slack message, sent on the RotationStarted event; using this timestamp, we can append messages to the Slack thread
- VersionID: the version of the secret at the RotationStarted event; by keeping the last version, it is possible to restore the password information from before the rotation at once if trouble happens

The biggest challenge we faced was that multiple Lambda invocations were triggered in stages, because several PUT events are emitted during a single Secret Rotation run. Even though I understood this in theory, it proved extremely troublesome in practice. Consequently, we had to pay attention to the following: Secret Rotation itself is a very fast process. Since the PUTs to CloudTrail for RotationStarted and RotationSucceeded (or RotationFailed) happen at almost the same time, the notification Lambda runs twice almost simultaneously. Because the notification Lambda handles both the Slack notification and the DynamoDB registration, the end-of-processing event may be handled before the RotationStarted processing completes. When this happens, a new message is posted to Slack without knowing its destination thread. To solve this, we chose a simple approach, as sketched below: the notification Lambda pauses for a couple of seconds whenever the event name is anything other than RotationStarted.
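The sketch below illustrates this ordering workaround. It is a simplified illustration, not our production code: the table name, event field names, and Slack-posting helper are hypothetical assumptions, and error handling is omitted.

```python
import json
import time
import urllib.request

import boto3

TABLE_NAME = "secrets-rotation-results"                    # hypothetical table name
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/x"   # hypothetical URL

table = boto3.resource("dynamodb").Table(TABLE_NAME)


def post_to_slack(text, thread_ts=None):
    # Minimal webhook-style post. A real integration would use
    # chat.postMessage and keep the returned "ts" to store as SlackTS.
    payload = {"text": text}
    if thread_ts:
        payload["thread_ts"] = thread_ts
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


def handler(event, context):
    # The exact event shape is an assumption here; the CloudTrail log
    # entries are the authoritative reference for these fields.
    detail = event["detail"]
    event_name = detail["eventName"]  # RotationStarted etc.
    secret_id = detail["additionalEventData"]["SecretId"]
    next_rotation = detail.get("nextRotationDate", "unknown")
    key = {"SecretID": secret_id, "NextRotationDate": next_rotation}

    if event_name == "RotationStarted":
        post_to_slack(f"RotationStarted: {secret_id}")
        # Store the Slack thread timestamp (and the secret VersionID)
        # so that the end-of-processing events can find the thread.
        table.put_item(Item=dict(key))  # plus SlackTS / VersionID in practice
    else:
        # RotationSucceeded / RotationFailed can arrive almost simultaneously
        # with RotationStarted, so wait briefly until the thread exists.
        time.sleep(3)
        item = table.get_item(Key=key).get("Item", {})
        post_to_slack(f"{event_name}: {secret_id}", thread_ts=item.get("SlackTS"))
```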
Secret Rotation can also fail, for instance due to a configuration error. In most cases the product is not immediately affected, because the failure occurs before the DB password is updated. In such cases, recovery can be performed with the following commands:

```bash
# Check VersionIdsToStages to find the version ID staged as AWSPENDING
$ aws secretsmanager describe-secret --secret-id ${secret_id} --region ${region}

# ---------- Output sample of Versions ----------
"Versions": [
    {
        "VersionId": "7c9c0193-33c8-3bae-9vko-4129589p114bb",
        "VersionStages": [
            "AWSCURRENT"
        ],
        "LastAccessedDate": "2022-08-30T09:00:00+09:00",
        "CreatedDate": "2022-08-30T12:53:12.893000+09:00",
        "KmsKeyIds": [
            "DefaultEncryptionKey"
        ]
    },
    {
        "VersionId": "cb804c1c-6d1r-4ii3-o48b-17f638469318",
        "VersionStages": [
            "AWSPENDING"
        ],
        "LastAccessedDate": "2022-08-30T09:00:00+09:00",
        "CreatedDate": "2022-08-30T12:53:22.616000+09:00",
        "KmsKeyIds": [
            "DefaultEncryptionKey"
        ]
    }
],
# -----------------------------------------------

# Remove the AWSPENDING stage from the target version
$ aws secretsmanager update-secret-version-stage --secret-id ${secret_id} --remove-from-version-id ${version_id} --version-stage AWSPENDING --region ${region}

# Then, from the console, run "Rotate secret immediately" for the target secret
```

Although this has not actually occurred, if the database password were changed by a problematic rotation, we would execute the following commands to retrieve the previous password. Since we also use alternate-user rotation, product access to the database is not immediately disabled, so we believe there is no issue as long as it is handled before the next rotation is executed.

```bash
$ aws secretsmanager get-secret-value --secret-id ${secret_id} --version-id ${version_id} --region ${region} --query 'SecretString' --output text | jq .

# For ${user} and ${password}, set the values obtained by
# aws secretsmanager get-secret-value.
# ${admin_user} stands for the administrator's DB user name.
$ mysql --defaults-extra-file=/tmp/.${admin_user}.cnf -e "ALTER USER ${user} IDENTIFIED BY '${password}';"

# Check connection
$ mysql --defaults-extra-file=/tmp/.user.cnf -e "STATUS"
```

With everything up to this point, we were able to prepare a foundation that achieves the following:

- Detect and notify the start, completion, success, or failure of a secret rotation to the relevant product teams.
- Ensure recovery from a failed secret rotation without affecting the product.

Our battle did not stop here

Although we had prepared the major functions described above, we identified three additional tasks we needed to address:

- Execute secret rotation within the company's defined governance limits.
- Align rotation timing with the schedule set by users registered in the same DB Cluster.
- Monitor compliance with the company's governance standards.

To achieve them, we had to develop peripheral functions.

Building a mechanism to monitor compliance with the governance constraints defined by the company

What we need to do here, in a nutshell, is obtain the list of all users in every DB Cluster and check that each user's password update date falls within the period required by corporate governance. We can obtain the latest password update date of every user by logging in to each DB Cluster and executing the following query:

```sql
mysql> SELECT User, password_last_changed FROM mysql.user;
+----------------+-----------------------+
| User           | password_last_changed |
+----------------+-----------------------+
| rot_test       | 2024-06-12 07:08:40   |
| rot_test_clone | 2024-07-10 07:09:10   |
:                :                       :
+----------------+-----------------------+
10 rows in set (0.00 sec)
```

This must be executed in every DB Cluster (a rough sketch follows).
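For illustration, collecting this across clusters might look like the sketch below. It is a minimal sketch under assumed names (the cluster inventory and credential lookup are hypothetical stubs), not the batch we actually run:

```python
import pymysql

# Hypothetical inventory; in practice this comes from the daily cluster metadata.
CLUSTERS = [
    {"id": "cluster-a", "host": "cluster-a.example.internal"},
    {"id": "cluster-b", "host": "cluster-b.example.internal"},
]


def get_admin_credentials(cluster_id):
    # Hypothetical stub: in practice, fetch the administrator user's
    # secret for this cluster (e.g., from Secrets Manager).
    return "admin", "password"


def fetch_password_ages(host, user, password):
    """Return (User, password_last_changed) rows for one cluster."""
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT User, password_last_changed FROM mysql.user")
            return cur.fetchall()
    finally:
        conn.close()


for cluster in CLUSTERS:
    user, password = get_admin_credentials(cluster["id"])
    for db_user, last_changed in fetch_password_ages(cluster["host"], user, password):
        # In our setup these rows are saved to DynamoDB rather than printed.
        print(cluster["id"], db_user, last_changed)
```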
However, we had already been collecting metadata from all DB Clusters every day, automatically generating an Entity Relationship Diagram and my.cnf, and running a script to check for inappropriate database settings. So we could solve this simply by adding a step that obtains the list of users and their latest password update dates and saves them in DynamoDB. The main DynamoDB columns are as follows:

- DBClusterID: partition key
- DBUserName: sort key
- PasswordLastChanged: latest password update date

In practice, the following users should be excluded:

- Users automatically generated for RDS, which we cannot control
- Users with names ending in "_clone", generated by the Secret Rotation function

For this reason, we obtain only the data we actually need with the following query:

```sql
SELECT
    CONCAT_WS(',',
        IF(RIGHT(User, 6) = '_clone', LEFT(User, LENGTH(User) - 6), User),
        Host,
        password_last_changed)
FROM mysql.user
WHERE User NOT IN ('AWS_COMPREHEND_ACCESS', 'AWS_LAMBDA_ACCESS', 'AWS_LOAD_S3_ACCESS',
                   'AWS_SAGEMAKER_ACCESS', 'AWS_SELECT_S3_ACCESS', 'AWS_BEDROCK_ACCESS',
                   'rds_superuser_role', 'mysql.infoschema', 'mysql.session', 'mysql.sys',
                   'rdsadmin', '');
```

In addition, we prepared a Lambda for SLI reporting that aggregates the information in DynamoDB. The output looks like this:

![SLI notification](/assets/blog/authors/_awache/20240812/sli.png =750x)

Its contents are as follows:

- Total Items: the number of users across all DB Clusters
- Secrets Exist Ratio: the ratio of SecretIDs that comply with the Secrets Manager naming rule used at KINTO Technologies
- Rotation Enabled Ratio: the ratio of secrets with the Secret Rotation function enabled
- Password Change Due Ratio: the ratio of users who comply with the corporate governance rule

The important thing is to bring the Password Change Due Ratio to 100%. As long as this ratio is 100%, there is no need to depend on the Secret Rotation function itself. With this SLI notification mechanism, we achieved the following:

- Monitor compliance with the company's governance standards.

A mechanism to synchronize rotation timing with the schedule set by users registered in the same DB Cluster

We had to write two sets of code to realize this mechanism:

1. A mechanism to decide the rotation execution time for each DBClusterID.
2. A mechanism to configure the rotation on Secrets Manager at the time determined above.

Each of these is described below.

The mechanism to decide the rotation execution time for each DBClusterID

As a premise, the execution time of a Secret Rotation is described by a schedule called a rotation window. Its description and usage can be summarized in two forms:

- rate expression: used to set the rotation interval as a designated number of days
- cron expression: used to set the rotation timing in detail, such as a specific day of the week or time

We decided to use a cron expression, as we wanted rotations to run in the daytime on weekdays. The other setting is the "window duration" of a rotation. By combining these two, we can control the execution timing of a rotation to some extent.
The relation between the rotation window and the window duration is as follows:

- The rotation window specifies the time when a rotation ends, not when it starts.
- The window duration determines the allowance for execution before the time set by the rotation window.
- The window duration's default is 24 hours.

That means, as one example, that if the rotation window is set to 10:00 AM on the fourth Tuesday of every month and the window duration is not specified (24 hours), the Secret Rotation will run sometime between 10:00 AM on the fourth Monday and 10:00 AM on the fourth Tuesday. This is hard to follow intuitively, but if we don't get this relationship right, Secret Rotation may run at unexpected times.

With those assumptions in mind, we set the requirements as follows:

- Rotations for the DB users of a given DBClusterID are executed in the same time slot.
- The window duration is three hours. Too short a window could mean problems pile up in the same time slot, from the trouble report through to its recovery.
- Execution timing is between 09:00 and 18:00 on weekdays, Tuesday to Friday. We don't execute on Mondays, since public holidays are more likely to fall on them. As the window duration is fixed at three hours, what can be set in the cron expression is the six hours between 12:00 and 18:00. Only UTC can be used in the cron expression.
- Execution timings should be dispersed as much as possible. If many Secret Rotations run at the same time, various API limits may be hit, and if some error occurs, many alerts would fire at once and we could not respond to them all simultaneously.

The whole flow of the Lambda processing is as follows:

- Data acquisition: acquire the DBClusterID list from DynamoDB, and acquire the existing Secret Rotation settings from DynamoDB.
- Schedule generation: initialize all combinations (slots) of week, day, and hour; check whether each DBClusterID already exists in the existing Secret Rotation settings; if it exists, place it into the same slot as in the existing settings; distribute new DBClusterIDs evenly across slots, adding each to an empty slot, or to the next slot if it is not empty; repeat until the end of the DBClusterID list.
- Data storage: store the new Secret Rotation settings after filtering out entries that duplicate existing data.
- Error handling and notification: when a serious error occurs, send an error message to Slack.

The DynamoDB columns stored here are as follows:

- DBClusterID: partition key
- CronExpression: the cron expression to set on the Secret Rotation

It's a bit hard to follow, but the resulting state looks like this:

![Slot putting in image](/assets/blog/authors/_awache/20240812/decide_en.png =750x)

That covers the mechanism for deciding the rotation execution time for each DBClusterID. However, this alone does not configure the actual Secret Rotation; we still need a mechanism that applies the settings.

The mechanism to set a rotation on Secrets Manager at the time determined above

We don't believe that the Secret Rotation mechanism is the only means of maintaining corporate governance; what matters more is demonstrable compliance with the governance standards the company has defined. Accordingly, instead of forcing teams to use this mechanism, we need one that makes users want to use it as the safest and simplest option conceived by DBRE (a sketch of what applying such a setting looks like follows below).
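For reference, applying a per-cluster schedule to a secret comes down to a call like the sketch below. This is a rough illustration with hypothetical names and an example schedule, not the actual code inside our tooling:

```python
import boto3

client = boto3.client("secretsmanager", region_name="ap-northeast-1")

# Hypothetical IDs and schedule: the cron expression (UTC only) would come
# from the DBClusterID -> CronExpression table described above.
client.rotate_secret(
    SecretId="myapp/aurora/app_user",
    RotationLambdaARN="arn:aws:lambda:ap-northeast-1:123456789012:function:mysql-multi-user",
    RotationRules={
        "ScheduleExpression": "cron(0 6 ? * TUE *)",  # rotation window end time
        "Duration": "3h",                             # window duration
    },
    RotateImmediately=False,  # register the schedule without rotating right now
)
```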
Within one DB Cluster, we may well find mixed requests from users: one user wishes to use Secret Rotation, while another insists on managing passwords themselves by a different method. To satisfy such requests, we needed a command-line tool that configures Secret Rotation per database user linked to a DBClusterID. As DBRE, we have been developing a tool called dbre-toolkit that turns our daily work into command lines. It is a package of tools, such as one that executes Point In Time Restore easily and one that acquires DB connection users from Secrets Manager to create a defaults-extra-file. This time, we added a subcommand:

```
% dbre-toolkit secrets-rotation -h
2024/08/01 20:51:12 dbre-toolkit version: 0.0.1
It is a command to set Secrets Rotation based on the Secrets Rotation schedule linked to a designated Aurora Cluster.

Usage:
  dbre-toolkit secrets-rotation [flags]

Flags:
  -d, --DBClusterId string   [Required] DBClusterId of the subject service
  -u, --DBUser string        [Required] a subject DBUser
  -h, --help                 help for secrets-rotation
```

The command completes the Secret Rotation setup by retrieving the designated combination of DBClusterID and DBUser from DynamoDB and registering the information in Secrets Manager. With this, we achieved the following:

- Execute secret rotation within the company's defined governance limits.
- Align rotation timing with the schedule set by users registered in the same DB Cluster.

By doing all of this, we finally completed everything we had set out to do.

Conclusion

Here's what we have achieved:

- We developed a mechanism to detect and notify relevant product teams about the start, completion, success, or failure of a secret rotation, by creating a system that detects CloudTrail PUT events and notifies appropriately.
- We ensured recovery from failed secret rotations without affecting the product. We prepared steps to handle potential issues, and found that understanding how Secret Rotation works helps minimize the risk of fatal errors.
- We executed secret rotations within the company's defined governance limits, through the mechanism for SLI notification.
- We synchronized rotation timing with the schedules set by users registered in the same DB Cluster, by developing a mechanism that stores a cron expression per DBClusterID in DynamoDB for configuring Secret Rotation.
- We enhanced compliance monitoring according to the company's governance, through the mechanism for SLI notification.

The whole picture became like this:

![The whole image](/assets/blog/authors/_awache/20240812/secrets_rotation_overview_en.png =750x)

The overall architecture turned out to be more complex than we initially imagined; in other words, we expected Secret Rotation management to be simpler. The Secret Rotation function provided by AWS is very effective if you simply use it as-is. However, we discovered that we needed to build many elements in-house, because the out-of-the-box solution did not fully meet our requirements, and we went through numerous rounds of trial and error to reach this point. Going forward, we aim to create a corporate environment where everyone can seamlessly use the KTC databases with the Secret Rotation mechanism we've developed. Our goal is to ensure the database remains safe and continuously available. KINTO Technologies' DBRE team is currently recruiting new teammates! We welcome casual interviews as well.
If you're interested, please feel free to contact us via DM on X. We would also be glad if you followed our corporate recruitment account on X!
Hello! I'm Wada (@cognac_n), a generative AI evangelist at KINTO Technologies. How do you manage your prompts? In this article I introduce Prompty, which makes it easy to create/edit, test, implement, and manage prompts!

1. What is Prompty?

Prompty is a tool for efficiently developing prompts used with large language models (LLMs). It lets you manage prompts and parameters centrally in YAML format, making it a great fit for managing prompts with version control tools such as GitHub and for team development. Using the Visual Studio Code (VS Code) extension can greatly improve the efficiency of prompt engineering work.

Benefits of adopting Prompty

Integration with Azure AI Studio and Prompt Flow is also attractive, but this article focuses mainly on the VS Code integration.

Prompty is recommended for people who:

- want to speed up prompt development
- need version control for prompts
- develop prompts as a team
- want to simplify the application code that executes prompts

https://github.com/microsoft/prompty

2. Prerequisites

Requirements (at the time of writing):

- Python 3.9 or later
- VS Code (if using the extension)
- An OpenAI API key or an Azure OpenAI Endpoint (depending on the LLM you use)

Installation and initial setup

Install the VS Code extension: https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty

Install the library with pip or a similar tool:

```
pip install prompty
```

https://pypi.org/project/prompty/

3. Trying it out

3-1. Creating a new Prompty file

Right-click in the Explorer tab and select "New Prompty" to create a scaffold.

![New Prompty](/assets/blog/authors/s.wada/20240821/image_2.png =350x)

New Prompty

The generated scaffold is as follows:

```yaml
---
name: ExamplePrompt
description: A prompt that uses context to ground an incoming question
authors:
  - Seth Juarez
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000
sample:
  firstName: Seth
  context: >
    The Alpine Explorer Tent boasts a detachable divider for privacy,
    numerous mesh windows and adjustable vents for ventilation, and
    a waterproof design. It even has a built-in gear loft for storing
    your outdoor essentials. In short, it's a blend of privacy, comfort,
    and convenience, making it your second home in the heart of nature!
  question: What can you tell me about your tents?
---

system:
You are an AI assistant who helps people find information. As the assistant,
you answer questions briefly, succinctly, and in a personable manner using
markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} to find answers to their questions.
Use their name to address them in your responses.

# Context
Use the following context to provide a more personalized response to {{firstName}}:
{{context}}

user:
{{question}}
```

Parameters go in the area between the `---` markers, and the prompt body is written below it. You can define roles using `system:` and `user:`.

Basic parameters:

| Parameter | Description |
| --- | --- |
| name | The name of the prompt |
| description | A description of the prompt |
| authors | Information about the prompt's authors |
| model | Information about the generative AI model the prompt uses |
| sample | If the prompt has placeholders such as {{context}}, the values written here are substituted when testing |
3-2. Configuring API keys and parameters

There are several ways to configure the API key and endpoint information needed to call the API, as well as the runtime parameters.

[Pattern 1] Writing them in the .prompty file

You can write them directly in the .prompty file:

```yaml
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000
```

You can also reference environment variables, as in `${env:AZURE_OPENAI_ENDPOINT}`. However, `azure_openai_api_key` cannot be set this way.

![azure_openai_api_key cannot be written in the .prompty file](/assets/blog/authors/s.wada/20240821/image_3.png =750x)

[Pattern 2] Using settings.json

This method uses VS Code's settings.json. If you click the play button at the top right of the screen while settings are missing, you are guided to edit settings.json. Besides the default definition, you can create multiple configs and switch between them while testing. If `type` is `azure_openai` and you run with an empty `api_key`, you are guided to the Azure Entra ID authentication described later.

```json
{
  "prompty.modelConfigurations": [
    {
      "name": "default",
      "type": "azure_openai",
      "api_version": "2023-12-01-preview",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "",
      "api_key": "${env:AZURE_OPENAI_API_KEY}"
    },
    {
      "name": "gpt-3.5-turbo",
      "type": "openai",
      "api_key": "${env:OPENAI_API_KEY}",
      "organization": "${env:OPENAI_ORG_ID}",
      "base_url": "${env:OPENAI_BASE_URL}"
    }
  ]
}
```

[Pattern 3] Using .env

If you create a .env file, environment variables are read from it. Note that the .env file must be placed in the same directory as the .prompty file you are using. This is a very handy way to configure things when experimenting locally.

```
AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY
AZURE_OPENAI_ENDPOINT=YOUR_AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION
```

[Pattern 4] Using Azure Entra ID

You can use the API by logging in with an Azure Entra ID that has the appropriate permissions assigned. I haven't had a chance to try this yet.

3-3. Running prompts in VS Code

You can run a prompt easily with the play button at the top right; the result is shown in OUTPUT. If you select "Prompty Output (Verbose)" from the dropdown in the output panel, you can inspect the raw result data. This is useful when you want to check details such as how placeholders were substituted or the token usage.

- The play button at the top right runs the prompt.
- Results can be checked in OUTPUT.

3-4. Other parameters

Various parameters are described on the following page: https://prompty.ai/docs/prompty-file-spec

Definitions such as `inputs`, and `outputs` when using JSON mode, improve the readability of a prompt, so they are well worth defining:

```yaml
inputs:
  firstName:
    type: str
    description: The first name of the person asking the question.
  context:
    type: str
    description: The context or description of the item or topic being discussed.
  question:
    type: str
    description: The specific question being asked.
```

3-5. Integrating into an application

The notation differs depending on the library your application uses. Prompty itself is also updated frequently, so always check the latest documentation. For reference, here is code we used in combination with Prompt Flow; prompts can be executed with very simple code:

```python
from promptflow.core import Prompty, AzureOpenAIModelConfiguration

# Configure AzureOpenAIModelConfiguration for loading the Prompty
configuration = AzureOpenAIModelConfiguration(
    azure_deployment="gpt-4o",                      # Azure OpenAI deployment name
    api_key="${env:AZURE_OPENAI_API_KEY}",          # API key from an environment variable
    api_version="${env:AZURE_OPENAI_API_VERSION}",  # API version from an environment variable
    azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",  # Azure endpoint from an environment variable
)

# Settings that override the model parameters
# (here max_tokens is overridden as a sample)
override_model = {"configuration": configuration, "max_tokens": 2048}

# Load the Prompty with the overridden model settings
prompty = Prompty.load(
    source="to_your_prompty_file_path",  # path to the Prompty file to use
    model=override_model                 # apply the overridden model settings
)

# Execute the Prompty with the given inputs and get the result
result = prompty(
    firstName=first_name,
    context=context,
    question=question
)
```

4. Summary

Prompty turned out to be a powerful tool that can greatly streamline prompt engineering work! In particular, the development environment integrated with VS Code lets you handle everything seamlessly, from creating prompts to testing, implementation, and management, and is very easy to use. I believe that mastering Prompty can significantly improve both the efficiency and the quality of prompt engineering. Please give it a try!

Benefits of adopting Prompty (recap)

We Are Hiring!

KINTO Technologies is looking for colleagues to help promote the use of generative AI in our business. We can also start with a casual interview. If you are even slightly interested, please contact us via the link below or by DM on X. We look forward to hearing from you!!
https://hrmos.co/pages/kinto-technologies/jobs/1955878275904303115

Our company's generative AI initiatives are introduced here: https://blog.kinto-technologies.com/posts/2024-01-26-GenerativeAIDevelopProject/

Thank you for reading to the end!
Introduction Hello, I am nam. I joined the company in November! Right after joining, I interviewed those who joined the company in February and March 2024, and have summarized their impressions in this article. I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interviews.

J.O ![alt text](/assets/blog/authors/nam/newcomers/icon-jo.jpg =250x)

Introduction I am J.O., and I joined the company in March. I am a producer in the KINTO ONE Development Division's New Vehicle Subscription Development Group. In my previous job at an operating company, I was involved in the planning and operation of the web/apps for our toC services, and in consolidating development requirement definitions on the business side.

How is your team structured? The New Vehicle Subscription Group involves various teams, including back-end, front-end, content development, and tool development. With over 40 members, including our service providers, it is one of the largest departments in the company.

What was your first impression when you joined KTC? Any surprises? I felt that the expectations placed on KTC were higher than I had imagined before joining, in terms of its relationship with group companies, the company's position, and the role it is expected to play in providing a new platform in the mobility industry.

What is the atmosphere like on site? Although the company is centered around engineers and has a quiet atmosphere, it is also quite friendly. Slack chats and the emojis used there keep things lively. You can tell it's a vehicle-related company, as many people decorate their desks with car models.

How did you feel about writing a blog post? Even though I had opportunities to create website content in my previous job, this is my first time writing about myself, so I'm feeling nervous.

Question from S.A.: "What surprised or impressed you about joining KTC?" There are a lot of events, such as study sessions. Some sort of event is held at least once every two weeks, and I was impressed by the proactive approach to receiving and sharing new information.

nam ![alt text](/assets/blog/authors/nam/newcomers/icon-nam.JPG =250x)

Introduction I joined KTC in February. I'm nam. I was a front-end engineer at a production company in my previous role.

How is your team structured? My impression is that it is a small team where the responsibilities are clearly divided among everyone.

What was your first impression when you joined KTC? Any surprises? The orientation was very warm. I felt a strong message that all of us are moving forward in the same direction.

What is the atmosphere like on site? Team members working on the same project sit nearby, so I feel that everyone works freely while consulting with each other. It was my first time working in a large office, and I had imagined that only the sound of keyboards would echo through the large space, but that was not the case, so I was relieved.

How did you feel about writing a blog post? I had been reading this Tech Blog since before I joined the company, and now that I'm finally on the writing side, I'm feeling nervous.

Question from J.O.: "What aspects make a website stand out to you from a front-end engineer's perspective?" I have a background in design, so I find websites where design and technology are in harmony truly amazing. I believe a great site is, simply, one that is well made.
Sometimes I come across sites where you can just imagine how much discussion must have gone on from the planning stage, how the engineers and designers communicated with each other, and how well they understood each other's fields. So when I see a harmonious site that excels both technically and in design, I think, "Wow, how robust. Great!"

KunoTakaC ![alt text](/assets/blog/authors/nam/newcomers/icon-kuno-takac.jpg =250x)

Introduction I am Kuno from the KTC Management Department. I am in charge of labor systems in general (such as SmartHR, Recoru, Raku-ro, Kaonavi, etc.). My previous job was as an SE in a factory, and before that I worked as a handyman (mainly handling infrastructure for small and medium-sized enterprises). In 2023, I was classified as having a level 4 disability (lower limb paralysis), but there is no need for any special care. I usually carry a cane, so it's easy to recognize me, but please remember my face so you can still recognize me even when I don't have it!

How is your team structured? There are 11 people in the Management Department, out of which 2 of us are in the KTC Management Department. But in Nagoya... I'm the only one! Please rest assured that we are getting along well.

What was your first impression when you joined KTC? Any surprises? As it is an IT company, I thought I would talk to the Management Department via some kind of system, so I was a little surprised that we talk face-to-face in a conference room.

What is the atmosphere like on site? The office is generally quiet, but it's easy to communicate, and we often have discussions together. Since the Management Department has open seating, it is convenient to choose a seat near the person you want to talk to.

How did you feel about writing a blog post? I thought it would be helpful to mention that I created the #lumbar-care channel on Slack. I also handle additional jobs besides labor systems, so writing this was a little difficult.

Question from nam: "It has been 3 months since you joined the company. Have you noticed anything different from your previous job, or something unique to KTC?" To put it in one word: quiet. My previous job was like a live music venue every day, with the air conditioning in the server room roaring like a jet engine, machine tools vibrating like an earthquake, the drum-like pounding of impact printers and electronic printers, and the warning tones of three-color patrol lights, all punctuated by telephones and SystemWalker alerts. In my previous job there was only an on-premises environment, and this is where I encountered SaaS for the first time. Sometimes on-premises solutions are preferable, and other times SaaS is better; I've realized that each has its own pros and cons.

M ![alt text](/assets/blog/authors/nam/newcomers/icon-m.jpg =250x)

Introduction I decided to dive into a new environment because I wanted to take on the challenge of developing in-house products, which was difficult to experience in my previous job.

How is your team structured? Our team develops a product to support car sales recommendations at dealerships, making them more efficient and advanced. We have a tech lead, front-end engineers, and back-end engineers.

What was your first impression when you joined KTC? Any surprises? Before joining the company, I had the impression that it was a "mature startup," so I thought I would be required to be autonomous and self-driven from day one. However, I was a little surprised that the onboarding was thorough and unhurried, including hands-on training and dialogues with the president.
Thanks to this, I was able to quickly pick up new domain knowledge and get to know the executives.

What is the atmosphere like on site? In my development team, we work on multiple product developments side by side. To stay informed about each other's tasks, we share information in our daily morning meetings, where each team member talks about what they are focusing on. I think we communicate frequently and chat casually when we are in the office.

How did you feel about writing a blog post? I have never had the opportunity to post information on a blog before, so it feels fresh and exciting.

Question from KunoTakaC: "What is your favorite storage solution? Please suggest something practical!" If you are having trouble keeping your phone and PC charging cables tidy on your desk or floor, try the cheero CLIP Multi-Purpose Clip! It's magnetic, so it's super easy to attach and remove, letting you quickly tie up any messy cables. Plus, it's flexible like a wire and keeps its shape, so you can even use it as a stand for your phone to watch videos!

R.S ![alt text](/assets/blog/authors/NAM/newcomers/icon-rs.jpg =250x)

Introduction I am R.S from the KINTO ONE Development Division's New Car Subscription Group. I am in charge of the front end of KINTO ONE.

How is your team structured? There are six people on our team.

What was your first impression when you joined KTC? Any surprises? I was surprised by the high level of flexibility in work styles. As a working parent, the full-flex work schedule is incredibly helpful.

What is the atmosphere like on site? In our weekly planning, we clarify individual tasks and proceed with our work diligently.

How did you feel about writing a blog post? I didn't expect to write one so soon, but after my first post, I became more aware of our company's blog.

Question from M: "How do you catch up when trying something new? Any learning tips?" When something interests me, I take a step forward and give it a try. I tend to explore broadly rather than deeply, diving into new challenges. Sometimes past experiences in completely different areas connect unexpectedly, and I find that moment fascinating.

Hanawa ![alt text](/assets/blog/authors/NAM/newcomers/icon-hanawa.jpg =250x)

Introduction I am Hanawa, a front-end engineer in the KINTO ONE New Vehicle Subscription Development Group. I mainly worked as a front-end engineer in my previous job. I would like to put the knowledge and experience I have gained so far to use in my work and improve my technical capabilities regardless of field.

How is your team structured? We have 6 people on our front-end team.

What was your first impression when you joined KTC? Any surprises? I was impressed by how generous the employee benefits are.

What is the atmosphere like on site? Everyone is highly attuned to technical updates and communicates well, so it is stimulating. I think it's an environment where we can easily make suggestions. There seem to be actual cases where services were developed from engineers' ideas, so I have the impression that this kind of atmosphere is cultivated throughout the company.

How did you feel about writing a blog post? I haven't really shared much publicly before, so I thought it was a great opportunity. Beyond this company introduction, I would like to write about tech topics in future posts.

Question from R.S: "What has changed significantly since joining the company?" Compared to my previous job, we have a much larger engineering organization here. (There were five engineers at my previous job.)
To be honest, I haven’t fully understood who is working on which products yet. Various events such as study sessions are held regularly across departments, so I hope to participate in them to deepen my understanding. Taro ![alt text](/assets/blog/authors/nam/newcomers/icon-taro.jpg =250x) Introduction I am Taro, and I joined KTC Creative Group. How is your team structured? We are a team of 9 directors and designers. What was your first impression when you joined KTC? Any surprises? I felt a strong sense of teamwork during the orientation for new employees with the message, "We will work together as One Team and move forward." What is the atmosphere like on site? The team members are cheerful, friendly and very creative. Thanks to active communication, we constantly exchange opinions and ideas, creating a stimulating work environment. How did you feel about writing a blog post? I thought, "Oh, it’s that thing I read in the past logs of the Tech Blog." Question from Hanawa "What do you prioritize the most in your day-to-day work?" The "current situation and goals" in terms of "issues, needs, and value." S.A. ![alt text](/assets/blog/authors/NAM/newcomers/icon-sa.jpg =250x) Introduction I am S.A. and I joined the Data Analysis Division. How is your team structured? Including the leader and myself, we are 9 team members in total. What was your first impression when you joined KTC? Any surprises? I was impressed by how pleasantly relaxed the atmosphere was. What is the atmosphere like on site? Everyone has their own area of expertise, and it’s an inspiring environment. How did you feel about writing a blog post? I was nervous because it was my first time writing a blog, but I thought it was a good initiative. Question from Taro "It's been a month since you joined the company, but have you become more conscious of anything in your work? " I’m trying to keep up with the fast pace so I don't fall behind. Lastly Thank you all for sharing your thoughts after joining our company! At KINTO Technologies, we are continually welcoming new members every day! We look forward to sharing more onboarding entries from various divisions and team members. Moreover, KINTO Technologies is actively seeking professionals who can collaborate across different divisions and fields! For more information, click here !
Introduction

Hello! I'm Hyuga, a producer in the Mobile App Development Group. Today I'm making my first blog post with a fun topic! Generative AI is a hot topic right now, and what I'd like to introduce is music generation AI. This music generation AI is truly amazing! Simply put, with just a few simple instructions it can generate music that rivals professional work. And it doesn't stop at the accompaniment: it also adds vocals with natural pronunciation. Generation takes only a few minutes. It's nothing short of magic. Please enjoy the wonderful feeling of becoming a famous music creator yourself. Today's theme: I used this music generation AI to make a company song for KINTO Technologies, entirely on my own initiative!

What is the music generation AI "Suno AI"?

There are various music generation AIs out there, but the one I used is "Suno AI," developed by Suno, Inc., an American startup. Suno, Inc. was founded by four people who previously worked at the AI startup Kensho: Michael Shulman, Georg Kucsko, Martin Camacho, and Keenan Freyberg. Suno AI was first released in December 2023 and has been updated continuously since. The latest stable version is v3.5, released in May 2024 (as of August 2024).

Here is a quick summary of the music generation steps in Suno AI. Note that AI evolves at a tremendous speed, so this flow may change; please treat it as a reference and check the latest information yourself.

1. Access the website (https://suno.com/): go to Suno's official site. ![](/assets/blog/authors/hyuga/20240814/sunoai1.png =630x)
2. Create an account: create a free account. You can sign up with various accounts, such as Google, Discord, or Microsoft. ![](/assets/blog/authors/hyuga/20240814/sunoai2.png =630x)
3. Go to the song creation section: move to the "Create" section.
4. Enter text prompts: enter the song's lyrics, style (genre), and other text prompts. ![](/assets/blog/authors/hyuga/20240814/sunoai3.png =630x)
5. Generate the song: Suno AI generates a song based on the entered prompts.
6. Download the song: if needed, you can download the generated song and edit it further.

I made an original company song with Suno AI, on my own initiative (lol)

Now, let me show you the power of Suno AI! I figured a demonstration should be fun, so I decided to go ahead and create a company song for KINTO Technologies without being asked! The plan was as follows:

1. Get comments from KINTO Technologies managers about the company's strengths!
2. Feed those comments as keywords to ChatGPT and have it generate lyrics!
3. Input the generated lyrics into Suno AI and have it create a wonderful company song!

That was the plan, but then I thought... Hmm, it's not quite fun enough... I know! I'll sing it myself!!! (lol) So the goal became: cut the vocals from the generated music and perform the song myself! Theme change: "I used Suno AI to create an original company song on my own initiative, and then sang it myself (lol)."

Comments from the managers

I received many wonderful comments about the company's strengths:

- There's a sense of speed, and as long as you have spirit, anyone can take on challenges and play an active role!
- Many engineers actively share information
- A modern development environment
- A rapidly expanding development organization
- People who love communication, curiosity about new technologies, and a mix of new and veteran engineers
- If you raise your hand, you can take on all sorts of things; the whole world is in scope; talented people
- Taking on new challenges

And so on. Truly wonderful!

The best lyrics, generated by ChatGPT!

I carefully passed the managers' comments to ChatGPT and had it generate these wonderful lyrics (translated from the Japanese original):

Yeah, yeah, here we go! KINTO Technologies, let's go!
With a sense of speed, and as long as you have soul,
anyone can take on the challenge. Feel that potential.
Actively sharing, carving out the road to the future.
A modern development environment sets the stage for innovation.
We are the KINTO Technologies!
Teammates who love to communicate,
with a spirit of inquiry into new technology,
we build the future. Let's walk the KINTO road.
With soul, we can get over any wall.
Chasing our dreams toward a boundless world.
KINTO Technologies, let's go!
Carving out the future, moving further ahead...

A sudden request

As I was writing this article... I got a DM from the person in charge of translating this tech blog... "Hyuga-san, could you make an English version of the original song for the translated article!?" Well, what do you know (lol). There was no reason to refuse such a wonderful request! Without a second thought, I replied "I can!" on the spot (lol). However, I thought it would be boring to make it with the same feel as the Japanese version. Since an English version means a Western-style song, I thought it would be nice to make it a bit more stylish! So I generated the original songs with the following genre specifications:

- Japanese version: a refreshing, fast-paced rock song
- English version: a calm yet uplifting jazz song

Now, how will they turn out!?

A masterpiece was born (lol)

The moment I heard the finished song, a single tear rolled down my cheek... (lol) The quality is just too high! Way too high!! The ultimate original company song was born, so enough with the preamble: let me present it! Please have a listen!!

First, the Japanese version! https://www.youtube.com/watch?v=zmv06e8cTFI

And next, the English version! https://www.youtube.com/watch?v=tCIkXdUv9NA

Summary

What did you think? Isn't it a masterpiece of unexpectedly high quality!? My next goal is to get this original company song recognized as the official company song (lol). Please try this moving music-generation AI experience for yourself. Even a free account can generate 10 songs per day; each generation produces 2 songs, so you get 5 free tries. Take a step into the world of AI-made music! Thank you for reading to the end!
Introduction

Hello, and thank you for reading! I'm Nakamoto, a frontend developer on KINTO FACTORY (below, FACTORY), a service that lets you upgrade the car you currently drive. In this article I'd like to introduce Strapi, the OSS tool we adopted to write and manage the article content of the recently released FACTORY Magazine.

What is Strapi?

Strapi is a self-hosted headless CMS. Unlike CMS services offered as SaaS, you prepare and operate the servers, database, and other environments yourself. (Strapi also offers Strapi Cloud, which provides a managed cloud environment.) At KINTO, several services already published columns and other articles using SaaS CMS services, but we had come to recognize their issues: the difficulty of balancing operations and cost, and the limited room for adding new features or customizing existing ones. So we started evaluating OSS CMS tools with self-hosting in mind. When you hear "OSS headless CMS," WordPress is probably the most familiar name, but while researching other tools gaining momentum recently, we came across Strapi.

Among the various OSS tools, the points we valued were:

- Ease of use: an intuitive admin UI that is friendly to both developers and content managers
- Community support: backing from a large community and extensive documentation
- Rich plugins: a wide variety of plugins that make it easy to extend functionality
- Scalability: being Node.js-based, it delivers high performance and scalability

Another deciding factor was that even if features were missing or didn't match our usage, updating or creating plugins looked easy with just JavaScript knowledge, so the adoption cost seemed low.

Architecture and deployment

Strapi is hosted on AWS, like FACTORY's e-commerce site, with a simple architecture using ECS and Aurora. As a CMS platform, it is kept independent of the FACTORY web application, and Strapi is used mainly by internal departments, such as the business division, and only for writing and publishing articles. When an article is published, a build of the web application runs, fetching article information from the Strapi API and embedding it into the pages. As a result, user clients never access Strapi directly, and the CMS environment sits in a closed network, shut off from unnecessary external access.

Customization examples

Next, let me introduce some of the customizations we made during adoption.

Creating a new plugin

Within the FACTORY Magazine mentioned above, there is a piece of content called Customer Voices (お客様の声), which carries interview articles with customers who actually purchased products through FACTORY and had them installed. These articles need to be linked to the vehicle model that received the work and to the products involved; with the default input box, writers would have to type in the vehicle name (e.g., RAV4) or the product name (e.g., Headlamp Design Upgrade) directly.

Vehicle information input

With such free-form input, however, names could be registered incorrectly. Also, to search for articles about the same vehicle model or product, as on a blog, it would be more useful later if the product IDs held inside FACTORY were tied to the article as well. So we built the lists for these input boxes from the BFF that the e-commerce site's frontend already calls.

Vehicle selection / Product selection

With this custom plugin, product and vehicle information can be linked without mistakes, and since we also customized it to display images, writers can make selections intuitively. Being able to reuse the BFF from the e-commerce site like this is, I think, another advantage of self-hosting. (With SaaS services, that kind of flexible customization tends to be difficult, not least because of security risks.)

:::message
An article about implementing a custom API has already been published, so please have a look as well: StrapiにカスタムAPIを実装する (Implementing a custom API in Strapi)
:::

Customizing an existing plugin

As another example, to fulfill the requirement of linking tags to articles and searching for similar articles by tag, we adopted the existing plugin tagsinput. However, this plugin stores the entered tags in the database as an associative array like `[{ name: tag1 }, { name: tag2 }]`, which made the search logic complicated when building an API to search by tag. To make searching simpler, we customized the plugin slightly so that the entered tags are stored simply as an array of strings, `[tag1, tag2]`:

https://github.com/canopas/strapi-plugin-tagsinput/blob/1.0.6/admin/src/components/Input/index.js#L29-L36

```diff
@@ -26,8 +26,7 @@
   const { formatMessage } = useIntl();
   const [tags, setTags] = useState(() => {
     try {
-      const values = JSON.parse(value);
-      return values.map((value) => value.name);
+      return JSON.parse(value) || [];
     } catch (e) {
       return [];
     }
```

https://github.com/canopas/strapi-plugin-tagsinput/blob/1.0.6/admin/src/components/Input/index.js#L64-L70

```diff
@@ -38,7 +37,7 @@
       onChange({
         target: {
           name,
-          value: JSON.stringify(tags.map((tag) => ({ name: tag }))),
+          value: JSON.stringify(tags),
           type: attribute.type,
         },
       });
```

As you can see, making small adjustments to an existing plugin to fit our usage is also easy. We have made various other customizations as well; among them, the one that allows embedding video tags in CKEditor, the rich-text editor used for article posting in Strapi, will be covered in a separate article, since it would make this one too long.

Finally

Among KINTO's services, FACTORY took the lead in adopting Strapi, an OSS CMS tool. The business division writing and posting articles with the released Strapi has given us feedback like "it's easier to use than the SaaS CMS service," so I'd say we're off to a good start. Requests like "we'd like it to do this" are also starting to come in, and we hope to answer them with OSS customizability as our strength. Operations have only just begun, but as we accumulate experience, it would also be good to think up new Strapi-based solutions that go beyond article writing. We also plan to share FACTORY's usage with other KINTO services that use SaaS CMSs, aiming for horizontal deployment within the company.
Introduction

Hello, I'm Kuwahara from the KINTO Technologies (KTC) SCoE Group, based at Osaka Tech Lab. SCoE stands for Security Center of Excellence, a term that may still be unfamiliar. This April, KTC reorganized its CCoE team into the SCoE Group. If you'd like to know more about the SCoE Group, see クラウドセキュリティの進化をリードする SCoE グループ (The SCoE Group leading the evolution of cloud security). For KTC's Kansai base, Osaka Tech Lab, see Osaka Tech Lab 紹介 (Introducing Osaka Tech Lab). In this post, I report on the 28th Shirahama Symposium on Cybercrime, held from July 4 to 6, 2024.

First, for those who don't know the place called "Shirahama": Shirahama is in Wakayama Prefecture. With its beautiful sea, sandy beaches, and hot springs, it is a charming tourist destination. Adventure World, which keeps four giant pandas, the most in Japan, is also in Shirahama. The symposium participants likely not only deepened their knowledge of cybersecurity but also enjoyed Shirahama's many charms.

Symposium overview

The theme was "How do we confront a drastically changing environment and increasingly complex cybercrime?" Although "cybercrime" is in the event's name, there were also lectures and panel discussions on recent general security threats and topics. Under the philosophy that "cybersecurity cannot be protected by a single organization," the symposium values horizontal connections among companies, government agencies, and educational institutions. As a result, much of what was said was strictly "for that occasion only," and you could hear candid, first-hand voices available only on site. The daytime sessions were held at 和歌山県立情報交流センターBig・U (Wakayama Prefectural Information Exchange Center Big-U), and the evening sessions at ホテルシーモア (Hotel Seamore), about 8 km away. (The fact that the daytime venue is actually in neighboring Tanabe City rather than Shirahama is something we won't dwell on.) There were many interesting lectures and presentations, but here I will introduce two keywords that left an impression on me. For the full program, check the official site.

Keyword 1: Cooperation beyond organizational boundaries

This symposium places great importance on networking. The symposium chair's greeting and multiple speakers emphasized that "countering threats is difficult for a single company or organization alone; it is important to defend as a plane, not as a point." This shows that cooperation across industries, and across industry, government, and academia, is essential to deal with increasingly complex and diverse cyberattacks. The important shared understanding is that information sharing among companies, collaboration with government agencies and the police, and cooperation with educational institutions are the keys to stronger security measures. Many police officials participated and exchanged views with people from private companies; in fact, the first person to approach me to exchange business cards at this symposium was from a prefectural police department. It was also emphasized that security incidents are difficult to handle within one company alone, and that sharing each organization's experience and know-how is important for implementing effective security measures. At the evening BOF (Birds of a Feather) sessions, participants with the same concerns gathered across organizations and industries and exchanged views actively.

Keyword 2: Generative AI and security

Several lectures covered the security of generative AI, a current trend. Among them, the lecture by Fujitsu Research left the strongest impression on me. It presented the latest trends and practical knowledge on security related to generative AI. What the Fujitsu Research lecture emphasized was the need to consider security from both sides: "protecting with AI" and "protecting AI."

Protecting with AI: AI as a means of cyber defense, and AI for preventing security incidents. In this area, the scope of existing security measures has expanded greatly because defense can now be carried out with generative AI. They introduced Fujitsu Research's initiative to expand its security AI components and turn DevSecOps into a framework.

Protecting AI: threats lurking in AI and attacks against AI, and protecting AI from attack. Here, the risks posed by generative AI were explained in detail, including concrete techniques of cyberattacks against generative AI and approaches to countermeasures. Attacks on AI were introduced with concrete examples of "stealing information" and "deceiving the AI."

This lecture systematically organized the security perspectives to consider when building products that use generative AI, and was extremely helpful. For example, it provided input useful for turning these ideas into concrete guidelines, such as formulating security guidelines for the generative AI development process and samples of guardrails and vulnerability scanners.

Tips for those attending next year

Here are a few tips for those attending next year:

- Securing tickets: the hot-spring symposiums, including this Shirahama Symposium (along with Dogo, Echigo-Yuzawa, Atami, and Kyushu), are extremely popular, and tickets are like gold dust. Be sure to check the sale start date and secure yours early. I also recommend buying the lunch (bento) ticket, since places to get lunch in and around the venue are limited.
- Securing transportation: there is a shuttle bus provided by the symposium from Shirahama Station to the venue, but the schedule is inflexible. Moving to the venue by public transport is difficult, so watch the shuttle bus times carefully. Renting a car is another option. (I attended in my own car, with the company's permission, which was a real help.)
- Choosing accommodation: considering the shuttle bus, it is convenient to stay near the evening venue (the hotel). The area around the evening venue is a hot-spring resort with many places to stay.
- Networking: I recommend bringing a large stack of business cards. Since the symposium emphasizes networking, the more actively you mingle, the more you gain.

Summary

In cybersecurity, connections beyond organizations and industries matter. The candid voices you can only hear on site are truly valuable. I thank the organizing committee, the speakers, the sponsor companies, and the participants for holding such a beneficial symposium. Why not immerse yourself in cybersecurity next year while watching Shirahama's beautiful sunset?

In closing

The SCoE Group I belong to is looking for colleagues to work with us. Whether you have hands-on experience in cloud security or are simply interested without experience, you are very welcome. Please feel free to get in touch. For details, please check here.
Hello, I'm Chris, a frontend developer in the Global Development Division at KINTO Technologies. When developing frontend components, you've probably heard about using props to pass necessary information. Popular frameworks such as Angular, React, Vue, and Svelte each have their own way of implementing this functionality for passing data between components. Discussing all of them would make this article very lengthy, so I will focus on Vue, which is commonly used in our Global Development Division. When considering component reusability, relying solely on props may not be sufficient. This is where slots come in. In this article, I will explain both props and slots, comparing their usage through practical examples.

Passing information with props

For example, let's say you need to implement a reusable component for a table with a title. By passing props for the title, headers, and data, you can easily achieve this.

```vue
<!-- Component -->
<template>
  <div>
    <h5>{{ title }}</h5>
    <table>
      <tr>
        <th v-for="header in headers" :key="header">{{ header }}</th>
      </tr>
      <tr v-for="(row, i) in data" :key="i">
        <td v-for="(column, j) in row" :key="`${i}-${j}`">{{ column }}</td>
      </tr>
    </table>
  </div>
</template>

<script>
export default {
  name: 'DataTable',
  props: ['title', 'headers', 'data'],
}
</script>
```

```vue
<!-- The parent that calls the component -->
<template>
  <DataTable :title="title" :headers="headers" :data="data"/>
</template>

<script>
// Omit the import statement for the component
export default {
  data() {
    return {
      title: 'Title',
      headers: ['C1', 'C2', 'C3', 'C4'],
      data: [
        {
          c1: `R1-C1`,
          c2: `R1-C2`,
          c3: `R1-C3`,
          c4: `R1-C4`,
        },
        {
          c1: `R2-C1`,
          c2: `R2-C2`,
          c3: `R2-C3`,
          c4: `R2-C4`,
        },
      ]
    }
  },
}
</script>
```

Using the code above, you can create the following table. (I've added some simple CSS styling, but that's not relevant to this article, so I won't go into detail here.)

To elaborate a bit on the use of props, Vue.js allows simple type checking and validation of the data received from the parent component, even without using TypeScript. The following is an example; for more details, please check the Vue.js official documentation. (Adding such settings to all the sample code in this article would make it too long, so it's omitted elsewhere.)

```vue
<script>
export default {
  props: {
    title: {
      // A prop with a String type. When a prop can have multiple types,
      // you can specify them using an array, such as [String, Number], etc.
      type: String,
      // This prop must be provided by the parent component.
      required: true,
      // Prop validation check. Returns a Boolean to determine the result.
      validator(value) {
        return value.startsWith('Title')
      }
    },
  },
}
</script>
```

Issues with using only props

While props are indeed convenient for specifying types and validating values, they can feel inadequate depending on what you want to achieve. For example, have you ever encountered requirements like these?

- Make the value displayed in a table cell bold or italic, or change the text color, based on conditions.
- Display one or more action buttons in each row of the table, which can be disabled based on conditions.

These requirements make sense, but implementing them using only props can lead to complex code. To change the style of a cell, you might need to pass the logic for determining the style as a prop, or add markers to the data objects indicating which values need style changes. Similarly, to add buttons to each data row, you would need to pass the button information as props to the component.
For example, if you extend the initial sample code, it would look like this:

<template>
  <div>
    <h5>{{ title }}</h5>
    <table>
      <tr>
        <th v-for="header in headers" :key="header">{{ header }}</th>
      </tr>
      <tr v-for="(row, i) in data" :key="i">
        <!-- The class information is retrieved using a function that evaluates the received styles -->
        <td
          v-for="(value, j) in row"
          :class="cellStyle(value)"
          :key="`${i}-${j}`"
        >
          {{ value }}
        </td>
        <!-- If there are buttons, prepare a separate column for them -->
        <td v-if="buttons.length > 0">
          <button
            v-for="button in buttons"
            :class="button.class"
            :disabled="button.disabled(row)"
            @click="button.onClick(row)"
            :key="`${button.text}-${i}`"
          >
            {{ button.text }}
          </button>
        </td>
      </tr>
    </table>
  </div>
</template>

<script>
export default {
  props: [
    'title',
    'headers',
    'data',
    // To receive the cell style logic as a prop
    'cellStyle',
    // To receive button information as a prop
    'buttons',
  ],
}
</script>

<template>
  <!-- Passing the function that returns class information, and the button-related information, as props -->
  <DataTable
    :title="title"
    :headers="headers"
    :data="data"
    :cell-style="cellStyle()"
    :buttons="buttons"
  />
</template>

<script>
export default {
  data() {
    return {
      // Other data information is omitted.
      buttons: [
        {
          text: 'edit',
          class: 'btn-primary',
          disabled: (rowData) => {
            // Logic to determine whether to disable the button
          },
          onClick: (rowData) => {
            // Logic after the button is pressed
          },
        },
        {
          text: 'delete',
          class: 'btn-danger',
          disabled: (rowData) => {
            // Logic to determine whether to disable the button
          },
          onClick: (rowData) => {
            // Logic after the button is pressed
          },
        },
      ],
    }
  },
  methods: {
    cellStyle() {
      return (val) => {
        // The logic that returns the necessary style class information
      }
    },
  },
}
</script>

Here is a screenshot showing the result of applying styles to cell text based on conditions and disabling buttons as needed.

However, if you want to control the HTML structure within the cell any further (e.g., adding <p>, <span>, or <li> tags), you will need to pass the HTML code as a string prop to the child component and render it with v-html. While v-html is a convenient way to render HTML, it can make the code hard to read once there are many dynamic elements, because you are constructing the HTML as a string. In summary, when using only props, you need to carefully consider how the child component will receive the data.

Using slots to complement the limitations of props

This is where the slot feature comes in. As explained in the official documentation, you can declare a slot in a component and pass HTML content from the calling template into that slot, letting you implement whatever you need within the slot area. The illustration above is a conceptual image: the box on the left represents a component using props, while the box on the right represents a component using slots. With props, each entry point is narrow and its type is fixed, so the developer can only pass information the component has defined. In contrast, slots provide a much wider entry point, giving the developer more control over what is passed into the component. For example, let's try implementing the data table above using slots.
<template>
  <div>
    <!-- default slot -->
    <slot />
    <!-- table slot -->
    <slot name="table" />
  </div>
</template>

<template>
  <DataTable>
    <!-- HTML written inside the component tag is automatically placed into the slot declared on the component side -->
    <!-- If not wrapped in a template, it goes into the default slot -->
    <h5>Title</h5>
    <!-- Placed into the slot named "table" -->
    <template #table>
      <table>
        <tr>
          <th v-for="header in headers" :key="header">{{ header }}</th>
        </tr>
        <tr v-for="(row, i) in data" :key="`row-${i}`">
          <td
            v-for="(column, j) in row"
            :class="{ 'font-italic': italicFont(column), 'font-weight-bold': boldFont(column) }"
            :key="`row-${i}-col-${j}`"
          >
            {{ column }}
          </td>
          <td>
            <button :disabled="editDisabled(row.c1)" @click="edit(row.c1)">Edit</button>
            <button :disabled="destroyDisabled(row.c1)" @click="destroy(row.c1)">Delete</button>
          </td>
        </tr>
      </table>
    </template>
  </DataTable>
</template>

<script>
export default {
  // Omit data information
  methods: {
    edit(id) {
      // Logic to edit the data for that row
    },
    destroy(id) {
      // Logic to delete the data for that row
    },
    italicFont(val) {
      // Logic to determine whether the font should be italic
    },
    boldFont(val) {
      // Logic to determine whether the font should be bold
    },
    editDisabled(id) {
      // Logic to determine whether the edit button should be disabled
    },
    destroyDisabled(id) {
      // Logic to determine whether the delete button should be disabled
    },
  },
}
</script>

In this example, no props are passed to the component, which makes the code look quite clean. However, there's one issue: it allows developers to implement anything they want. For instance, when using the component above, developers are expected to use the specified tags (such as <h5> for the title and <table> for the table) and apply the appropriate styles. Yet, due to a lack of communication or insufficient implementation knowledge, developers might use different tags. This can lead to differences in appearance, and you'll need to double-check during testing to make sure everything fits on each screen size.

<template>
  <DataTable>
    <!-- Use h1 instead of h5 -->
    <h1>Title</h1>
    <template #table>
      <!-- Use <div> instead of <table>, <tr>, and <th> -->
      <div>
        <div>
          <div v-for="header in headers" :key="header">{{ header }}</div>
          <div></div>
        </div>
        <div v-for="(row, i) in data" :key="`row-${i}`">
          <div v-for="(column, j) in row" :key="`row-${i}-col-${j}`">
            {{ column }}
          </div>
          <div>
            <button :disabled="editDisabled(row.c1)" @click="edit(row.c1)">Edit</button>
            <button :disabled="destroyDisabled(row.c1)" @click="destroy(row.c1)">Delete</button>
          </div>
        </div>
      </div>
    </template>
  </DataTable>
</template>

Using <div> tags for everything in a table may seem like an extreme example, but it's safer not to grant more freedom than necessary. While interpretations vary by company or team, my ideal approach is to consult with designers and allow flexibility only where it is needed. Before deciding whether to use props or slots, it's important to determine which parts of the implementation can be flexible and which must follow specific guidelines.
Here is an example that combines the two: props where the structure must be enforced, and a scoped slot where flexibility is needed.

<template>
  <div>
    <!-- Use props to ensure the title text is always placed in <h5> -->
    <h5>{{ title }}</h5>
    <!-- Ensure the use of the <table> tag -->
    <table>
      <tr>
        <!-- Use props to pass header information to enforce the use of <th> -->
        <th v-for="header in headers" :key="header">{{ header }}</th>
      </tr>
      <!-- Dynamically generate slots based on the number of rows in the passed data -->
      <!-- Use v-bind to pass each row back to the parent template -->
      <slot name="table-item" v-for="row in data" v-bind="row" />
    </table>
  </div>
</template>

<script>
export default {
  props: ['title', 'headers', 'data'],
}
</script>

<template>
  <!-- Pass the title and headers as props -->
  <DataTable title="Title" :headers="headers" :data="data">
    <!-- Receive the v-bind data on the parent side -->
    <template #table-item="row">
      <!-- Accept a row of data and define how it should be displayed -->
      <tr>
        <td v-for="(column, i) in row" :key="`col-${i}`">
          {{ column }}
        </td>
        <td>
          <button :disabled="editDisabled(row.c1)" @click="edit(row.c1)">Edit</button>
          <button :disabled="destroyDisabled(row.c1)" @click="destroy(row.c1)">Delete</button>
        </td>
      </tr>
    </template>
  </DataTable>
</template>

By the way, when using slots, you can use this.$scopedSlots on the component side to check which slots the parent is using and how. There are various use cases; for instance, you can determine what tags are being used within a slot. This provides a mild form of validation against the problem of excessive slot flexibility mentioned earlier. (A minimal component-side sketch of this check appears after the summary.)

<template>
  <DataTable title="Title" :headers="headers" :data="data">
    <template #table-item="row">
      <!-- Notify the developer in some way if it is not <tr> -->
      <div>
        <td v-for="(column, i) in row" :key="`col-${i}`">
          {{ column }}
        </td>
      </div>
    </template>
  </DataTable>
</template>

Summary

To sum up, although using props is the easiest way to develop reusable components with Vue, it lacks flexibility, as shown in the examples in this article. On the other hand, slots give developers more freedom in their implementation, but that freedom can lead to unexpected usage, making it difficult to ensure quality. Therefore, it is important to involve stakeholders in the component's development and decide in advance how much flexibility each part of the component should have. Based on those decisions, you can use props and slots appropriately to balance control and flexibility. Furthermore, providing thorough documentation will help ensure that users of the component understand the specifications and the intended usage.
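As promised above, here is a minimal sketch of the $scopedSlots check. This is an illustrative example rather than production code: it assumes the Vue 2 Options API ($scopedSlots; in Vue 3 you would inspect this.$slots instead), it invokes the scoped slot once with the first data row, and the warning message and the idea of running the check in mounted() are my own hypothetical choices.

<script>
export default {
  name: 'DataTable',
  props: ['title', 'headers', 'data'],
  mounted() {
    // $scopedSlots exposes each slot as a function from slot props to VNodes.
    const slotFn = this.$scopedSlots['table-item']
    if (!slotFn) return
    // Render the slot once with a sample row and inspect the resulting tags.
    const vnodes = slotFn(this.data[0] || {}) || []
    const badNode = vnodes.find((vnode) => vnode.tag && vnode.tag !== 'tr')
    if (badNode) {
      // How to notify is a team decision: a console warning in development,
      // throwing in a dev build, failing a snapshot test, and so on.
      console.warn(`[DataTable] expected <tr> in the "table-item" slot, got <${badNode.tag}>.`)
    }
  },
}
</script>

A check like this will never be airtight, since slot content can be dynamic, but it surfaces the most common misuse early without taking away legitimate flexibility.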
I'm Ryomm, and I develop my route (iOS) at KINTO Technologies. On July 5 we held the 室町情報共有会 LT大会 🎋七夕スペシャル🎋 (Muromachi knowledge-sharing LT meetup, Tanabata special)! This article is a written version of my lightning talk there, "Master Slack! 3000x Slack Efficiency." The slides are on Speaker Deck.

Motivation

Are you getting the most out of Slack? Slack has a wide range of features, and once you can use them well, you may be able to fix the small annoyances of daily work. Here I'll run through Slack's basic features at top speed. As you read, try thinking "I could use this feature for that!" or "combining these two could be interesting!"

Mastering "Search"

First, search! Slack's search supports a rich query syntax. Besides the phrase search and minus (exclusion) search you know from Google, you can restrict results by date range or channel, search by reaction emoji, and even filter by where a message was shared from. You can of course build the same filters through the GUI without memorizing any syntax, but for searches you run regularly, keeping the query in a note and pasting it into the search bar is much faster. (A sample query appears at the end of this article.) https://slack.com/intl/ja-jp/help/articles/202528808-Slack-内で検索する

Mastering "Custom responses"

Next, custom responses! You can make Slackbot reply with arbitrary responses, configured on the Slackbot customization page. When any of the phrases registered on the left is posted, Slackbot returns one of the answers on the right at random. You can exploit that randomness to, for example, roll a die. https://slack.com/resources/using-slack/a-guide-to-slackbot-custom-responses

Mastering "Mentions"

When you want to broadcast a message, use @channel or @here. There is also @everyone, which mentions the entire workspace, but you will almost never need it. These mentions do not reach people who have muted their notifications, and they cannot be used inside threads. Channel-wide mentions also tend to flood people's notifications, so use them only when the time and place call for it. https://slack.com/help/articles/202009646-Notify-a-channel-or-workspace

Mastering "User groups"

This is where user groups come in. Creating a user group lets you mention several people at once and add the whole group to a channel in one step. You can also register default channels for a group, so when onboarding someone who needs to join many channels, adding them to the user group is all it takes. https://slack.com/help/articles/212906697-Create-a-user-group

Mastering "Stock-type information"

Information can be broadly divided into stock and flow: stock information accumulates, while flow information streams past. Ordinary Slack conversations are flow information, and stock information that must not get washed away is often kept in Confluence or similar tools. Slack itself, however, offers several ways to keep flow information around as stock: Later (saved items), pins, canvases, bookmarks, and lists. They also combine well with workflows and search, so they are easy to mix and match.

Mastering "Notification"

Notification settings are highly customizable. You can split your channels into sections and mute or configure each section separately, register keywords so you are notified whenever a specific phrase appears, and, with the Reacji Channeler app, forward a message to a specific channel whenever a particular emoji reaction is added.

Mastering "Reminder"

You can send reminders to a chosen channel, with optional recurrence. It's worth working out when to use reminders, workflows, and scheduled messages, since each has its strengths! https://slack.com/intl/ja-jp/help/articles/208423427-リマインダーを設定する

Mastering "Huddle"

Huddles let you hold calls right inside Slack! Their best feature is that the accompanying chat stays in Slack, and since you don't need a URL to join, dropping in is easy. You can also create a link to a huddle, for example to jump in from an Outlook calendar entry. And even when you're working quietly on your own, you can enjoy the huddle's cheerful background music.

Mastering "Email forwarding"

You can send email to a Slack channel. For example, filter the emails you want to share with your team in your mail client and forward them to the channel's generated email address.

Mastering "Workflow"

Workflows are the backbone of automation in Slack! They are useful in many situations: standardizing inquiry templates, collecting submitted data into a spreadsheet, chaining actions together, onboarding people who join a channel, and more. Combine them with Google Sheets, Google Apps Script, and the like, and the possibilities grow even further. https://slack.com/intl/ja-jp/help/articles/17542172840595-ワークフローを作成する---Slack-でワークフローを作成する

Mastering "Custom App"

If what you want is beyond what a workflow can do, you can build a custom app with the Slack API. The recently released Slack CLI makes building custom apps easier than ever. You do have to write code, but the freedom increases dramatically, and all sorts of things become possible inside Slack. (A minimal app sketch appears at the end of this article.) https://api.slack.com/docs

Conclusion

I gave this LT because I had noticed that surprisingly many people don't know these features, and enough people asked for a blog version that I wrote one. How many of these were you already using? If any were new to you, put them to work starting today!
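To make the search section concrete, here is the kind of query you might keep in a note. The channel, user, and date values are hypothetical placeholders; the modifiers (in:, from:, before:/after:, quoted phrases, and minus terms) are standard Slack search syntax:

    in:#ios-release from:@ryomm after:2024-07-01 before:2024-07-31 "release note" -draft

And for the custom-app route, here is a minimal sketch of an app using Bolt for JavaScript (one common way to use the Slack API). The trigger word, reply text, and environment variable setup are illustrative placeholders, not code from the talk:

const { App } = require('@slack/bolt');

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,            // bot token (xoxb-...)
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Reply in a thread whenever someone writes "deploy" in a channel the bot has joined.
app.message('deploy', async ({ message, say }) => {
  await say({
    text: `<@${message.user}> Deploy checklist: tests green? release notes written?`,
    thread_ts: message.ts, // keep the reply in the message's thread
  });
});

(async () => {
  await app.start(process.env.PORT || 3000);
  console.log('⚡️ Slack app is running');
})();

With a bot token and signing secret from your app's settings page, inviting the bot to a channel is enough for any message containing "deploy" to get a threaded checklist reply.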
🎉 We are a Gold Sponsor of iOSDC Japan 2024

Hello! The Obon holidays are almost here; this year I'm planning to go out and have fun with my kids. I'm Hiroya (@___TRAsh). This time we have an announcement from the iOS team: KINTO Technologies is sponsoring iOSDC Japan 2024 as a Gold Sponsor 🙌 At iOSDC Japan 2024, our first time exhibiting, we will run a coding quiz and hand out original novelty goods, so please come visit our booth! And since this is a good opportunity, I interviewed some of our iOS members. I hope it gives you a sense of who works here.

🎤 Interviews

This time I interviewed the KINTO Unlimited iOS team. KINTO Unlimited has many members from overseas and is a truly multinational team; daily work is conducted mostly in English (though most members speak Japanese as well).

T.O.

── Please introduce yourself briefly.
Hello, I'm T.O., an iOS engineer at KINTO Technologies. Up to my previous job I worked on web frontends; mobile development is new to me here, and this is also the first job where I've used GitHub for work. In the roughly two and a half years since I joined, I've been involved in a variety of projects and learned many architectures and modern development practices.

── What changed after you joined this company?
I built up my skills as an iOS engineer. Also, I moved to Tokyo when I joined, so my living environment changed too. It was during the COVID-19 pandemic, so there were few people around, and being able to see the pandas at Ueno for the first time left an impression on me.

── What do you like about the company?
There's a very approachable atmosphere. You can casually ask about anything, personal or technical, and it's easy to chat whether you're in the office or remote. The benefits are also generous for those of us raising children.

── What do you want to take on next?
I want to do more development using AR and ML. I happen to be on a project in that area, so I want to go deeper. Also, I want to play games with my kids.

── A final word!
The working style here is very flexible, and it's a great place to work 👍

V.V.

── Please introduce yourself briefly.
I'm Russian, from Russia, and have been doing iOS for about eight years. My hobbies are tabletop RPGs and raising my kids. I've built Windows desktop apps in Russia and worked on an ambulance service in the United States. After coming to Japan I worked at another company for a few years, then joined KINTO Technologies.

── What changed after you joined this company?
I had always worked at small companies, and with children I felt some anxiety, but KINTO Technologies is a Toyota group company, so I can work with peace of mind. My base salary isn't so different, but the allowances are generous, so overall things got much better. I bought a double bed with my bonus.

── What do you like about the company?
At the small teams where I worked before, there was no one to consult on technical matters or to mentor me. At KINTO Technologies it's easy to get technical advice, there are opportunities to share my own knowledge, and there are many experienced members, so it's very stimulating.

── What do you want to take on next?
I've had the chance to build AR and ML features, which made me even more interested, so I want to dig deeper into those areas. I also want to keep providing well for my family.

── A final word!
There are plenty of opportunities to grow here, so come join us ✋

S.C.

── Please introduce yourself briefly.
I was born in Korea, moved to Canada, and am now a Canadian living in Japan. I was interested in Japanese films and culture and had many friends here, so I came to Japan; it's been about ten years now. At my previous job I also worked on the backend. I'm currently the team lead of the KINTO Unlimited iOS team.

── What changed after you joined this company?
Because mobile engineers from across all projects belong to one group, it's easy to build skills focused on iOS development. I'm also happy that I get to spend more time working in Japanese.

── What do you like about the company?
At my previous job the system was huge and the work was mostly maintenance, but at KINTO Technologies there's a lot of new feature development and many opportunities to learn new technologies. It's easy to adopt modern tech, and we actively hold study sessions, so there are many chances to share knowledge.

── What do you want to take on next?
I just became team lead, so I want to develop my leadership. I'm also curious about implementation differences between the OSes, so I'd like to broaden my Android knowledge. And I'm in graduate school, so I'd like to graduate.

── A final word!
I'm having fun gaining all kinds of experience in a modern development environment I never had before 👍

🚙 Summary

We are still very much a growing company, and many of our products are young, so our development environment makes it easy to adopt modern approaches. You'll work on a diverse team with many opportunities to learn new cultures and technologies. If that sounds like an environment you'd like to work in, please apply! https://www.kinto-technologies.com/recruit/

And with that:
:::message
Here is the challenge token! #KTCでAfterPartyやります
:::

On Monday, September 9, together with TimeTree and WealthNavi, we are holding our first three-company collaborative event, iOSDC JAPAN 2024 AFTER PARTY 🥳 The venue is our office in Nihonbashi Muromachi 🗺️ Please join us for this as well! https://kinto-technologies.connpass.com/event/327743/

Extreme heat is expected on the day of the event. Stay well hydrated and enjoy! We look forward to seeing you at our booth ✋
Introduction

On KINTO Technologies' Platform Engineering team, we were not fully satisfied with our current logging solution. With new AWS services available, we saw an opportunity to enhance our logging platform, making it both easier to use and more cost-effective - a win-win situation! Of course, we could not just tear down everything already in place and replace it with the new shiny services - that would be like replacing the engine of a car while it's still running! We needed to investigate what new services we could use and how to configure them to meet our needs.

As part of our exploration of OpenSearch Serverless for our new log platform, we needed to find a solution for our alert system. Currently, we use the Alerting feature of our OpenSearch cluster, but this feature is unavailable in serverless instances. Thankfully, as of AWS Managed Grafana version 9.4, the Grafana OpenSearch plugin can use an OpenSearch Serverless instance as a data source (see the Grafana OpenSearch plugin page), so we could use Grafana for our alerting needs! We still needed to figure out how to configure both services so that they would work nicely together.

At this point in our investigation, we had already created an OpenSearch Serverless instance and tested log ingestion from all of the sources we wanted to use. The remaining task was to set up a test Grafana instance in our Sandbox using our serverless instance as a data source. At the time of writing, the AWS documentation is not explicit about how to do exactly that. As engineers, we often don't have a step-by-step guide for every task; this is when we need to explore and experiment with whatever we are building to see what works. We also asked AWS Support for help narrowing down the necessary permissions, and they had to escalate our request to both the Amazon Managed Grafana internal team and the OpenSearch team, as the documentation does not exist yet. This motivated us to write this article and share the knowledge.

A quick self-introduction before continuing: I'm Martin, a Platform Engineer at KINTO Technologies. I joined the team last year and have been working with AWS regularly since then. Working on this project has been a great learning experience for me, and I'm excited to share it with you! The biggest takeaway I got from this project is that AWS Support is a great resource, and you should not hesitate to ask for help when you need it.

Setting up our environment

In this article, we'll set everything up using the AWS Console. You can, of course, use your favorite Infrastructure as Code tools with AWS to build the same configuration. This article assumes you are familiar with the AWS Console and already have an OpenSearch Serverless instance running. Please note that the configurations used in this article prioritize simplicity. I strongly recommend reviewing and adjusting these settings to align with your organization's security requirements.

Setting up the IAM role

Before anything else, we need to create an IAM role for our Grafana instance to use. If you plan to use other AWS services with your Grafana workspace, it might be better to select the Service managed option when creating the Grafana workspace. You can then update the role created by AWS, or provide the ARN of your custom role when setting up the data source in Grafana.
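If you would rather script this step than click through the console, here is a minimal sketch using the AWS SDK for JavaScript v3. This is my own illustration, not from the AWS documentation: the role name and region are placeholders, and trustPolicy is the trust policy document shown just below.

import { IAMClient, CreateRoleCommand, PutRolePolicyCommand } from "@aws-sdk/client-iam";

const iam = new IAMClient({ region: "ap-northeast-1" }); // example region

// The trust policy shown just below, allowing Grafana to assume this role.
const trustPolicy = {
  Version: "2012-10-17",
  Statement: [
    { Effect: "Allow", Principal: { Service: "grafana.amazonaws.com" }, Action: "sts:AssumeRole" },
  ],
};

async function createGrafanaRole() {
  const { Role } = await iam.send(
    new CreateRoleCommand({
      RoleName: "grafana-opensearch-serverless", // placeholder name
      AssumeRolePolicyDocument: JSON.stringify(trustPolicy),
    })
  );
  // The permission policy shown later in this section would be attached the same way:
  // await iam.send(new PutRolePolicyCommand({
  //   RoleName: Role.RoleName,
  //   PolicyName: "grafana-opensearch-access",
  //   PolicyDocument: JSON.stringify(permissionPolicy), // the document shown below
  // }));
  return Role.Arn;
}

createGrafanaRole().then((arn) => console.log("Grafana role ARN:", arn));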
Here is the trust policy needed when creating the IAM role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "grafana.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

You can get the same trust policy by selecting the AWS service trusted entity type and choosing AmazonGrafana in the Use case section.

Here is the permission policy required for accessing OpenSearch Serverless from Grafana, with special thanks to the AWS Support team for escalating our request to the Grafana and OpenSearch teams to provide us with the minimum necessary permissions:

{
  "Statement": [
    {
      "Action": [
        "es:ESHttpGet",
        "es:DescribeElasticsearchDomains",
        "es:ListDomainNames"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": "es:ESHttpPost",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:es:*:*:domain/*/_msearch*",
        "arn:aws:es:*:*:domain/*/_opendistro/_ppl"
      ]
    },
    {
      "Action": [
        "aoss:ListCollections",
        "aoss:BatchGetCollection",
        "aoss:APIAccessAll"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:aoss:<YOUR_REGION>:<YOUR_ACCOUNT>:collection/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}

OpenSearch Access Policy

On the OpenSearch side, we need to add a data access policy for our newly created IAM role. Even though we gave the IAM role the necessary permissions to access OpenSearch, we still need a data access policy to allow the role to access the data in the collections. See the AWS documentation for more information.

In the serverless section of the OpenSearch Service page menu, select Data access policies, then click the Create access policy button. Add a name and a description for your access policy, then select JSON as the policy definition method. Use the following policy, adapted from the Grafana OpenSearch plugin documentation (the plugin docs show it in Terraform syntax; it is converted here to the JSON the console expects):

[
  {
    "Rules": [
      {
        "ResourceType": "index",
        "Resource": [
          "index/<NAME_OF_YOUR_OPENSEARCH_INSTANCE>/*"
        ],
        "Permission": [
          "aoss:DescribeIndex",
          "aoss:ReadDocument"
        ]
      },
      {
        "ResourceType": "collection",
        "Resource": [
          "collection/<NAME_OF_YOUR_OPENSEARCH_INSTANCE>"
        ],
        "Permission": [
          "aoss:DescribeCollectionItems"
        ]
      }
    ],
    "Principal": [
      "<GRAFANA_IAM_ARN>"
    ],
    "Description": "Read permissions for Grafana"
  }
]

Update it with the name of your OpenSearch Serverless deployment and the ARN of the IAM role we created earlier.

A little bit of Networking

Before continuing with the creation of our Grafana instance, we are going to create a few networking resources. First, let's create two subnets in the same VPC as your OpenSearch Serverless deployment; each subnet should be in a different Availability Zone. Once they are created, update the route table of each subnet to add a route from 0.0.0.0/0 to an Internet Gateway. Next, create a security group accepting inbound HTTPS traffic from your VPC and all outbound traffic to 0.0.0.0/0. With all of this in place, we can now create our Grafana instance!

Creating your Grafana Instance

Search for the Amazon Managed Grafana service in the Console search bar. On the service's homepage, use the button that the AWS engineers conveniently placed there to create a Grafana workspace. In the first step of the creation page, set the name and description of your Grafana workspace, and set the version to at least 9.4. Version 10.4 is the latest version available, so that is what I will use. On the next page, under Authentication access, select your preferred authentication method; I'll select AWS IAM Identity Center. In the Permission type section, select Customer managed and choose the ARN of the IAM role you created earlier.
I had a weird issue where, after creating the Grafana workspace, it was using a different IAM role from the one I had selected, so I had to update the workspace to use the correct role. It could be a bug or a misconfiguration on my side. For the sake of this article, we will agree that I definitely selected the correct role and that this was a bug. Ok? Great!

In the Outbound VPC connection section, select the same VPC as the one your OpenSearch Serverless instance is deployed in. For the mapping and security groups, select the subnets and the security group we created earlier. In the Workspace configuration options section, make sure to select Turn plugin management on. For this tutorial, we will select Open access in the Network access control section. Click the next button and review your settings.

Once the workspace is created, set up your authentication method. I selected AWS IAM Identity Center, so I'll simply add my user and make myself an admin. You should now be able to connect!

Grafana Meets OpenSearch Serverless

Before adding our OpenSearch Serverless data source, we need to install the OpenSearch plugin in our Grafana workspace. To do this, follow these steps: In the menu on the left, select Administration, then Plugins and Data, and finally Plugins. On the Plugins page, select All instead of Installed in the field at the top of the page. Search for the OpenSearch plugin and install it. Once it is installed, you should see an Add new data source button at the top right of the OpenSearch plugin page. Click on it.

Next, configure the data source to connect to your OpenSearch Serverless instance:

HTTP section: add the URL of your OpenSearch Serverless instance in the URL field.
Auth section: toggle on SigV4 auth and select the region where your OpenSearch Serverless instance is located.
OpenSearch Details section: toggle on Serverless and set the index you want to use.
Logs section: set the names of your message field and level field.

Finally, click Save & test. You should receive a message confirming that you have successfully connected to OpenSearch. You can now use this data source to create alerts and dashboards!

Conclusion

I hope this article has been helpful and that you can now set up your own Grafana instance with OpenSearch Serverless as a data source. For us at KINTO Technologies, using Grafana for alerting looks like a great choice for our new logging solution. With this setup, we'd have a robust, efficient, and cost-effective logging and alerting solution that meets our specifications. Personally, I find creating alert queries in Grafana more straightforward and flexible than in OpenSearch.

By the way, the Platform Group at KINTO Technologies is hiring! We are always looking for talented engineers to join our team. If you're interested in joining us or want to learn more about what we do and what it's like to work here, please feel free to reach out! We have a web page with all our job listings here.
I'm Ryomm, and I develop my route (iOS) at KINTO Technologies. There are still plenty of situations where you need UITextView, for example when you want to use TextKit. When I tried wrapping UITextView with UIViewRepresentable so it could be used from SwiftUI, I got stuck on adjusting its height, so this article describes the solution.

The solution

Something like this works:

import UIKit

struct TextView: UIViewRepresentable {
    var text: NSAttributedString

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    func makeUIView(context: Context) -> UITextView {
        let view = UITextView()
        view.delegate = context.coordinator
        view.isScrollEnabled = false
        view.isEditable = false
        view.isUserInteractionEnabled = false
        view.isSelectable = false
        view.backgroundColor = .clear
        view.textContainer.lineFragmentPadding = 0
        view.textContainerInset = .zero
        return view
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        uiView.attributedText = text
    }

    func sizeThatFits(_ proposal: ProposedViewSize, uiView: UITextView, context: Context) -> CGSize? {
        guard let width = proposal.width else { return nil }
        let dimensions = text.boundingRect(
            with: CGSize(width: width, height: CGFloat.greatestFiniteMagnitude),
            options: [.usesLineFragmentOrigin, .usesFontLeading],
            context: nil)
        return .init(width: width, height: ceil(dimensions.height))
    }
}

extension TextView {
    final class Coordinator: NSObject, UITextViewDelegate {
        private var textView: TextView

        init(_ textView: TextView) {
            self.textView = textView
            super.init()
        }

        func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool {
            return true
        }

        func textViewDidChange(_ textView: UITextView) {
            self.textView.text = textView.attributedText
        }
    }
}

(I've added a background color in the screenshots for visibility.)

Explanation

In makeUIView(), setting view.isScrollEnabled to false caused the text to stop wrapping. Using setContentHuggingPriority() and setContentCompressionResistancePriority() made the text wrap again with scrolling disabled, but the vertical display area was not adjusted properly: when displaying two or more lines of text, anything beyond the vertical bounds was cut off.

func makeUIView(context: Context) -> UITextView {
    let view = UITextView()
    view.delegate = context.coordinator
    view.isScrollEnabled = false
    view.isEditable = false
    view.isUserInteractionEnabled = false
    view.isSelectable = true
    view.backgroundColor = .clear
    // Something like this?
    view.setContentHuggingPriority(.defaultHigh, for: .vertical)
    view.setContentHuggingPriority(.defaultHigh, for: .horizontal)
    view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
    view.setContentCompressionResistancePriority(.required, for: .vertical)
    view.textContainer.lineFragmentPadding = 0
    view.textContainerInset = .zero
    return view
}

(・〜・)

So we turn to sizeThatFits(). This method, which UIViewRepresentable can override since iOS 16, lets you size the view based on the size proposed by its parent. Since I wanted to pass the text to the view as an NSAttributedString, I calculate the height of the received text myself. I referred to this article for the height calculation.

func sizeThatFits(_ proposal: ProposedViewSize, uiView: UITextView, context: Context) -> CGSize? {
    guard let width = proposal.width else { return nil }
    let dimensions = text.boundingRect(
        with: CGSize(width: width, height: CGFloat.greatestFiniteMagnitude),
        options: [.usesLineFragmentOrigin, .usesFontLeading],
        context: nil)
    return .init(width: width, height: ceil(dimensions.height))
}

On its own, this still leaves the view's area larger than the size calculated in sizeThatFits(), so add the following two settings in makeUIView() to remove the extra padding:

textView.textContainer.lineFragmentPadding = 0
textView.textContainerInset = .zero

Done ◎

Closing

It took me quite a detour to arrive at "just calculate it nicely in sizeThatFits()", so I decided to write it up 🤓
Hello ( º∀º )/ This is Murayama from the Tech Blog Team and the Budget Control Group! In this article, I'd like to share insights from the organizing staff as we hosted our company's first external event, the "KTC Meet Up"! You can read another article about this event, detailing everything from planning the study session to appointing the support staff, written by the organizer, Kinchan ✍️: The first KINTO Technologies MeetUp! From start to launch.

The Tech Blog team members are actively involved in supporting events! We don't just manage the Tech Blog; we engage with everyone and help energize the company. That's the kind of team we are! ('-')ง

Looking back to late June 2023, I joined the team because they were planning to host an external event in August. All team members have dual roles and belong to different groups. In my regular duties I handle financial matters, so I took care of purchasing the necessary supplies and provided support on the event day. Since this was KINTO Technologies' first external event, we made sure to gather all the necessary equipment.

Although this event was held offline, we also purchased filming equipment! This means we can host online or hybrid events in the future as well 😳 It's great that our company is proactive about taking on new challenges!! I'll share some photos from here:

Couldn't wait until the test, so they started using it as soon as it arrived. Everyone trying and experimenting together.

The event day came in a flash!

Test, test!

This time, we decided to test streaming internally.

Whoa!

We also made T-shirts! Matching!

Excited!

Let's get started!

Celebrating the birthday of our first external event with birthday glasses.

Our manager!

Individuality shining!

The Tech Blog team also took the stage!

Good job presenting!

Group discussions: each table was lively!

"So busy talking, no time to eat…" said one of our speakers.

Buzzing.

Looks fun!

Note to self: beer and highballs were popular choices.

Peace.

Thumbs up.

And just like that, the two hours flew by! Here are the "standout moments" from the event, as highlighted by the support staff and organizing team: When the collaborators gathered. The moment the cleanup was finished - everyone pitched in voluntarily! Hearing "That was awesome!" and "When's the next one?" as people left. How, right after recruiting support staff for the event, everyone started sharing ideas. The impact of arriving at the event venue on the day. The discussions on the day - everyone was so engaged in conversation! Receiving consultations; I enjoy being involved in such exciting interactions! It ended as a great success! Working together, explaining how the camera works, for example, and accomplishing all of this as One Team. Twitter buzzing with excitement even before the event started. The strong sense of unity! Participants asking a variety of questions, and post-event survey comments like "We'll try this in our team too." Seeing the organizers enjoying themselves too was fantastic! Having all the equipment perfectly set up on site. When most of the name tags had been handed out at reception! Colleagues from different specialties and departments working together to make the event a success. The event ran smoothly on the day, and it was wonderful to see everyone thinking about what they could do on their own initiative.
The review session was also very fruitful, with many opinions exchanged. Everyone's cooperation was visible, and it made for a wonderful event!

Conclusion

Stay tuned for an upcoming article from the support staff's perspective and another featuring the case studies presented by the speakers 😗 In the meantime, I'm sharing this article because I captured a lot of great photos and wanted to share them with you 😳

Thank you for reading until the end!