KINTO Technologies Tech Blog
Introduction

Hello! I am rioma from KINTO Technologies' Development Support Division. I usually work as a corporate engineer, maintaining and managing the IT systems used throughout the company. We recently held a study session in the form of case study presentations and roundtable discussions, specializing in the corporate IT area, under the title "KINTO Technologies MeetUp! - Sharing Four Cases for Information Systems Professionals by Information Systems Professionals." In this article, I will introduce the case study presented at that session, along with supplementary information.

Why We Chose This Topic

We had an idea to host our first-ever study session, inviting professionals from outside our organization. Since we had a project available that could serve as a presentation topic, I proposed focusing on the theme of transitioning authentication infrastructures. Although the transition was technically for our sister company, KINTO, and their environment rather than ours, I believe I was able to give a preliminary introduction to KINTO Technologies' corporate IT activities, including our schedule and the events occurring at the time. In this article, I will briefly supplement the information presented and reintroduce the content.

Premise

Why was it necessary to switch to a new authentication infrastructure? At KINTO, there were many minor inconveniences and security issues surrounding the authentication infrastructure. After evaluating our options, Microsoft Entra ID (formerly Azure AD) appeared to be the optimal solution, so we decided to proceed with switching to Entra ID. Other reasons for this choice included the fact that KINTO Technologies' authentication infrastructure was already on Entra ID, motivating us to implement it with the goal of enhancing collaboration such as tenant integration, and the fact that we held Microsoft E3 licenses but were not making the most of them. There were also significant advantages in terms of cost cutting.

On Switching Over

There were two considerations when making the switch. The first was to implement access policies that had previously been enforced with certificates, under similar conditions but without using certificates. The setup is based on conditional access: as an overview, by combining conditions such as "devices registered with MDM" and blocking anything with non-matching attribute values, I was able to implement an access policy more robust and flexible than the existing one.

The second was a specification that resets the passwords of all accounts when switching. This specification had quite a strong impact, and I had to find a way to handle it without affecting all our internal members. To address it, I changed the passwords for all accounts following the forced system reset, based on certain rules. Since the change rules were announced beforehand, and detailed post-login procedures were also distributed, the login process posed no issues and there were only a few inquiries.

*In fact, most people were using PINs instead of passwords to log in, so the announcement about the changed passwords was not particularly meaningful; if anything, it caused a bit of confusion.

Trouble Surrounding the Switchover Work

As is common when developing procedure manuals, I received numerous comments that the manual and the work outline were difficult to understand. This was almost entirely my fault: I was so focused on explaining the risks and effects of work with as large an impact as switching authentication infrastructures that I chose words and explanations that proved challenging for those less familiar with such systems.

As mentioned above, I only realized just before the switchover that one of the post-switchover PC login patterns was logging in with a PIN. The login instructions prepared in advance did not mention this at all, which created a puzzling situation for those accustomed to logging in with PINs. Fortunately, I managed to fix the materials on the day before and the day of the switchover, but the repeated revisions and reissues were time-consuming and confusing. In addition, I noticed a design error during the switchover and had to take the somewhat unreasonable step of executing the switchover procedure while correcting the design and the procedure manual at the same time. I was relieved that the issue did not affect a fundamental part of the project. Please see the attached slides for the remaining issues.

Conclusion

While there were some minor inquiries and suggestions, no critical issues occurred, such as extended downtime preventing people from working. The switchover of the authentication infrastructure was therefore implemented successfully. Later, I was very happy to hear internal staff say, "It's amazing that there were so few inquiries and so little business impact despite the scale of the transition." I will continue to implement system changes and transitions, and aim to improve our internal environment based on this experience.
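The article does not show the actual policy, but the idea described above (require MDM-registered or compliant devices as a condition and block everything else) could be sketched as a Microsoft Graph conditional access payload roughly like the following. All names and values here are illustrative assumptions, not the policy KINTO actually deployed.

```json
{
  "displayName": "Require managed device (sketch, not the real policy)",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["compliantDevice", "domainJoinedDevice"]
  }
}
```

Starting in report-only mode (`enabledForReportingButNotEnforced`) is a common way to confirm that a policy like this would not lock anyone out before enforcing it.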
Introduction

Hello, I am Ki-yuno, in charge of front-end development for KINTO FACTORY. In this article, I would like to outline the process of setting up a debugging environment for React projects in Visual Studio Code (hereafter referred to as VS Code). I had only ever used VS Code as a glorified notepad, so I struggled with it quite a bit (mainly due to the language barrier). For those of you about to build a debugging environment in VS Code, feel free to explore beyond the challenges I faced. Good luck!

Environment Information

OS: macOS Sonoma 14.1.2
VS Code: ver 1.85.1
Node.js: 18.15.0
Terminal: zsh

Development Procedure

1. Set up launch.json

Add launch.json to create a debugger launch configuration. Select "Run and Debug" in the left-side menu of VS Code. Clicking "Create a launch.json file" after making your selection will create a launch.json file in your project.

:::message
When setting it up for the first time, a debugger needs to be selected. I will select Node.js since this is for React.
:::

Creating a launch.json file

Immediately after creation, a default launch configuration is added to launch.json.

2. Add a New Launch Configuration

Add a debugger launch configuration to the launch.json you just created. The configuration to add is as follows:

```json
{
  "name": "[localhost]Chromedebug",
  "type": "node-terminal",
  "request": "launch",
  "command": "npm run dev",
  "serverReadyAction": {
    "pattern": "started server on .+, url: (https?://.+)",
    "uriFormat": "%s",
    "action": "debugWithChrome"
  },
  "sourceMaps": true,
  "trace": true,
  "sourceMapPathOverrides": {
    "webpack:///./*": "${webRoot}/src/*"
  }
}
```

The edited launch.json will look like the image below. I deleted the default launch configuration, but it is fine to leave it as is. You can also edit the command property if you want to change the startup command, or add tasks with the preLaunchTask property if you want to run multiple commands when debugging (I won't go into detail this time). The name property value becomes the name of the launch configuration.

3. Start Debugging

All you have to do is press F5 and the debugger will start. When the debugger starts successfully, a debugging toolbar appears at the top center. You can use that toolbar or the function keys to execute actions such as step in, step over, and so on.

When in Trouble

Here are the specific challenges I encountered. I hope this helps those who have run into similar issues.

◽ The debugging terminal displays 'sh-3.2$', and upon execution it returns 'npm command not found'

Restarting VS Code solves this problem. Apparently this occurs when VS Code is auto-launched. In my environment, I log into Microsoft 365 when my PC starts, and I encountered this problem when VS Code auto-launched upon login.

◽ Despite npm being installed, it continues to return 'npm command not found' when debugging starts

Add the following to the .vscode/settings.json file. If the file itself does not exist yet, create it first.

```json
{
  // Path setting so npm script commands can be executed
  "terminal.integrated.profiles.osx": {
    "zsh": {
      "path": "/bin/zsh",
      "args": ["-l", "-i"]
    }
  }
}
```

:::message
If the terminal execution environment is bash, change the property name and path from zsh to bash.
:::

This passes the PATH through to the debugger's execution terminal so that npm commands can be executed.

Summary

In this article, I summarized how to build an environment for debugging React projects in VS Code. I think we can still play around more to make our VS Code workflow more efficient, so I hope to improve it further with tasks.json next time. In my opinion, being able to debug during development significantly increases QOL (quality of life) and boosts development productivity. As a side effect, it could also contribute to a more positive atmosphere in the office and increase the level of happiness during the commute... maybe? Best wishes for your debugging life. Thank you for reading!

Lastly, KINTO FACTORY, where I belong, is looking for people to work with. If you are interested, please check out the job openings below! @card @card
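As a pointer toward that tasks.json next step, the preLaunchTask mentioned above could be wired up roughly like this. This is only a sketch: the task label and the "lint" npm script are assumptions, not part of the article's actual project.

```json
// .vscode/tasks.json (sketch; assumes an npm script named "lint" exists)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "lint-before-debug",
      "type": "npm",
      "script": "lint",
      "problemMatcher": []
    }
  ]
}
```

Then, adding `"preLaunchTask": "lint-before-debug"` to the launch configuration would run that task every time debugging starts, before `npm run dev` is executed.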
Introduction

Hello to everyone reading the tech blog. We recently decided to introduce Marketing Cloud, and as part of delivering "Norikae GO" emails I wanted to trigger automated processing by creating a Journey rather than sending one-off emails. A Journey is a mechanism that automatically rolls out multiple marketing actions when a customer takes a certain action. For example, when a customer clicks a specific link in an email, related information is automatically delivered afterwards; it is one part of automated marketing. Unfortunately, I could not find a way to add a Journey as an activity in Automation Studio and was stuck, so I compiled the results of my research into this blog post.

The Email Delivery Partner

There are several reasons to use Journey Builder:

- Branching, randomization, and engagement features are available
- It integrates with Salesforce, for example creating tasks and cases or updating objects

However, Journey Builder cannot run scripts or SQL queries. For example, you may need to merge synchronized data sources before sending a large volume of emails. In such cases, Journey Builder needs to be invoked after those activities have completed in Automation Studio. Therefore, it is desirable to integrate Automation Studio with Journey Builder to deliver emails. Let's walk through the setup together.

Configuration

1. Create an Automation and add a schedule as the starting source. Set the schedule to a future time and save it. If you do not save, the subsequent settings will not work, so don't forget.
2. Add activities (SQL queries, filters, etc.). This is required for linking with the Journey: if no data extension is selected, the Journey cannot be triggered.
3. Create a Journey. Add a data extension as the entry source, and select the data extension used in step 2. This is important: if you select a different data extension, it cannot be linked with the Automation from step 1.
※ At this point, even if you save the Journey and go back to the Automation, you cannot select the Journey from the activities, because there is no "Journey" option among the Automation activities. Don't panic; the magic is coming!
![Step3-2](/assets/blog/authors/Robb/20240319/03-2.png =300x)
4. In the Journey, click "Schedule" below the canvas, select "Automation" as the schedule type, then click "Select". Is "Automation" grayed out and unselectable? Go back to step 1 and save the Automation.
5. In the "Schedule summary", click "Configure schedule" and select the Automation created in step 1.
6. Edit the contact evaluation to specify which records the Journey processes.
7. Add emails, flow controls, and so on.
8. Ready! Validate the Journey and activate it. Activating it here does not send emails immediately, so no need to worry: the send timing depends on the Automation.
9. Go back to the Automation and you will see that the Journey has been added to it automatically. Isn't that amazing?
10. Finally, go ahead and activate the Automation. Every time the Automation is triggered, the Journey is triggered too!

Good work! I'll take a break here with a cup of coffee. I hope you refresh yourself with your favorite drink as well and enjoy automated email delivery.

Happy marketing!

Source: https://www.softwebsolutions.com/resources/salesforce-integration-with-marketing-automation.html
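As an illustration of the SQL query activity in step 2, an Automation Studio query might prepare the audience that the Journey's entry-source data extension reads from. This is only a sketch: the data extension name `Synced_Contacts` and its fields are assumptions, and in Automation Studio the target data extension is chosen in the activity's settings rather than in the SQL itself.

```sql
/* Sketch: select opted-in contacts from a synchronized data source
   into the data extension used as the Journey's entry source.
   "Synced_Contacts" and all field names are illustrative. */
SELECT
    c.SubscriberKey,
    c.EmailAddress,
    c.LastName
FROM Synced_Contacts c
WHERE c.OptInFlag = 'true'
```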
Introduction

Hello, I am Ryo, developing the ID Platform in the Global Development Group. On 2024/01/19 I attended the OpenID Summit at Shibuya Stream Hall, and I am writing this article to share my impressions and the interesting points I found. Because of the pandemic, this was the first summit in four years, since OpenID Summit Tokyo 2020, so on the day the venue was packed with people interested in OpenID and the atmosphere was very lively. The topics this time were what changes digital ID brought about during the four pandemic years, and how digital identity is likely to develop in the world going forward.

Program Flow

Impressions

Compared with other widely applicable technology areas, OpenID is not particularly well known, and I imagine many people have never even heard of it, so I was somewhat surprised when I arrived at the venue to find people interested in OpenID gathered from so many companies. Since some attendees were participating for the first time, the morning focused on the history of OpenID's development, its future prospects in digital identity and digital money, and details of the document translation and talent development activities carried out by the OpenID Foundation Japan working groups. From the morning presentations, I strongly felt that OpenID's future direction is a shift from the authentication and authorization domain toward identity verification (digital identity) and digital money operations. In the afternoon, companies presented the problems they encountered at the stage of actually deploying and operating OpenID, along with overall solutions and countermeasures. However, the afternoon program ran in two halls, and since I do not possess NARUTO-style shadow-clone jutsu, I sadly could not attend both at once.

Memorable Presentations

Countermeasures against impersonation attacks when using OpenID Connect
Presenter: Junki Yuasa, Cyber Resilience Laboratory, Nara Institute of Science and Technology
The case introduced was quite rare, but I was impressed that a second-year master's student had dug this deeply into operational experiments with OpenID. There are several OpenID authentication modes, and depending on the use case some are less secure, so I felt that risky areas in specific cases like the one in this presentation should be handled carefully in future deployments.

Access tokens in the Mercari app
Presenter: Gia Nguyen, Software Engineer, ID Platform Team, Mercari, Inc.
The Mercari app is famous as an extremely popular secondhand-goods marketplace in Japan. An ID platform engineer there explained the difficulties they had operating Mercari ID with older approaches, and the effort that went into letting users use the service smoothly in both the mobile app and the browser. Among other things, I learned that although they achieved many of their goals using browser cookies, the Chrome browser has a special rule capping cookie lifetime at 400 days. We have made various UI/UX efforts ourselves since launching our ID platform, but the 400-day cookie lifetime was news to me.

About JWT

You have probably heard the name JWT (JSON Web Token), but if you rarely deal with authentication and authorization, you may never have touched on JWT's role, or the relationships between JWK, JWS, and JWE, which often appear alongside it. So let me explain briefly first:

- JWT is a standard for ensuring the trustworthiness of information exchanged over a network. JWS and JWE are concrete realizations of the JWT standard.
- JWS (JSON Web Signature) consists of three parts separated by ".": the Header (which records the signing method), the Payload (the actual information), and the Signature (which guarantees the content has not been tampered with). Since a JWS is base64-encoded, once decoded, all of the Payload's information is exposed.
- JWK (JSON Web Key), in the context of the JWT explanation above, is the key used to sign the hash of the Payload's contents according to the method recorded in the JWT Header, producing the Signature.
- JWE (JSON Web Encryption), compared with JWS above, is a JWT realization that protects confidentiality and integrity at the same time. It is therefore split by "." into five parts, the second of which is the encrypted key used to decrypt the Payload. No one other than the holder of the decryption key can decrypt the Payload's contents.

Image source

SD-JWT

At this OpenID Summit, Italy's track record of deploying and operating digital identity and digital money was introduced, and a new concept called SD-JWT was explained there. It was the first time I had heard of it, so after the summit I looked it up myself. This is where the main topic of this article begins: let me briefly explain the SD-JWT I researched.

Selective Disclosure JWT (SD-JWT) is, as the name suggests, a JWT that discloses claims selectively. Since JWS and JWE already exist, let me first explain how SD-JWT came to be designed. For disclosing the Payload's contents, two options currently exist:

- Full disclosure: decode the JWS from base64, and anyone can see everything in the Payload.
- Full non-disclosure: with JWE, no one except the holder of the decryption key can see the Payload's contents.

However, there is no option for when you want to disclose only part of the information. That is why SD-JWT was born. For example, when the owner of a digital wallet buys a 100,000-yen item, the seller does not need to see the general attributes used for identification (birthdate, address, phone number, and so on); it only wants to see the buyer's wallet balance. The buyer, in turn, can complete the purchase by disclosing only the essential information, such as balance and ID, without exposing all of their personal information. This alone may not be sufficient, but disclosing only the claims the holder needs to share with a business is also effective, to a degree, in preventing personal data leaks.

How SD-JWT Works

We start from the conventional ID token issuance procedure. First, the personal information of a user, A, is represented in JSON as follows:

```json
{
  "sub": "cd48414c-381a-4b50-a935-858c1012daf0",
  "given_name": "jun",
  "family_name": "liang",
  "email": "jun.liang@example.com",
  "phone_number": "+81-080-123-4567",
  "address": {
    "street_address": "123-456",
    "locality": "shibuya",
    "region": "Tokyo",
    "country": "JP"
  },
  "birthdate": "1989-01-01"
}
```

Then the issuer attaches an SD-JWT salt (a random value) to each attribute:

```json
{
  "sd_release": {
    "sub": "[\"2GLC42sKQveCfGfryNRN9c\", \"cd48414c-381a-4b50-a935-858c1012daf0\"]",
    "given_name": "[\"eluV5Og3gSNII8EYnsxC_B\", \"jun\"]",
    "family_name": "[\"6Ij7tM-a5iVPGboS5tmvEA\", \"liang\"]",
    "email": "[\"eI8ZWm9QnKPpNPeNen3dhQ\", \"jun.liang@example.com\"]",
    "phone_number": "[\"Qg_O64zqAxe412a108iroA\", \"+81-080-123-4567\"]",
    "address": "[\"AJx-095VPrpTtM4QMOqROA\", {\"street_address\": \"123-456\", \"locality\": \"shibuya\", \"region\": \"Tokyo\", \"country\": \"JP\"}]",
    "birthdate": "[\"Pc33CK2LchcU_lHggv_ufQ\", \"1989-01-01\"]"
  }
}
```

Each attribute in "sd_release" is hashed with the function recorded in "_sd_alg" and stored in the "_sd" array below; adding the issuer's signing key (cnf), expiration time (exp), and issue time (iat) produces the new Payload. Issuing a token based on this Payload yields the SD-JWT:

```json
{
  "kid": "tLD9eT6t2cvfFbpgL0o5j/OooTotmvRIw9kGXREjC7U=",
  "alg": "RS256"
}
.
{
  "_sd": [
    "5nXy0Z3QiEba1V1lJzeKhAOGQXFlKLIWCLlhf_O-cmo",
    "9gZhHAhV7LZnOFZq_q7Fh8rzdqrrNM-hRWsVOlW3nuw",
    "S-JPBSkvqliFv1__thuXt3IzX5B_ZXm4W2qs4BoNFrA",
    "bviw7pWAkbzI078ZNVa_eMZvk0tdPa5w2o9R3Zycjo4",
    "o-LBCDrFF6tC9ew1vAlUmw6Y30CHZF5jOUFhpx5mogI",
    "pzkHIM9sv7oZH6YKDsRqNgFGLpEKIj3c5G6UKaTsAjQ",
    "rnAzCT6DTy4TsX9QCDv2wwAE4Ze20uRigtVNQkA52X0"
  ],
  "iss": "https://example.com/issuer",
  "iat": 1706075413,
  "exp": 1735689661,
  "_sd_alg": "sha-256",
  "cnf": {
    "jwk": {
      "kty": "EC",
      "crv": "P-256",
      "x": "SVqB4JcUD6lsfvqMr-OKUNUphdNn64Eay60978ZlL74",
      "y": "lf0u0pMj4lGAzZix5u4Cm5CMQIgMNpkwy163wtKYVKI",
      "d": "0g5vAEKzugrXaRbgKG0Tj2qJ5lMP4Bezds1_sTybkfk"
    }
  }
}
.
{
  Signature: the issuer computes the Payload's signature with its signing key and places it here, guaranteeing that the Payload's contents cannot be tampered with
}
```

The order of the attributes and hash values in "sd_release" and "_sd" does not need to be preserved.

How SD-JWT Is Used

The issuer sends the SD-JWT together with "sd_release" to the holder. Depending on the situation, the holder submits the attributes they wish to disclose together with the SD-JWT, enabling verification while preserving security and integrity.

```
"email": "[\"eI8ZWm9QnKPpNPeNen3dhQ\", \"jun.liang@example.com\"]",
```

If only the email address should be disclosed, the holder submits the part above together with the SD-JWT. The verifier can confirm the email's authenticity by checking the following two points:

1. Hashing the email part yields a result that matches "5nXy0Z3QiEba1V1lJzeKhAOGQXFlKLIWCLlhf_O-cmo" in the "_sd" list.
2. Recomputing the Payload's signature yields a result that matches the Signature in the SD-JWT (i.e., the Payload has not been tampered with).

Summary

By attending this summit, I came to understand the history of OpenID's development and its future direction. I also learned about SD-JWT, a format different from the JWT our ID team has been using. There were many interesting talks, so I recommend attending even for those not normally involved in the ID field. I look forward to the day KINTO Technologies can take the stage as well.

References

OpenID Summit Tokyo 2024
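The issue-then-selectively-disclose flow above can be sketched in a few lines of Python. This is a simplified illustration of the mechanism, not a spec-compliant SD-JWT implementation (the real draft specifies an exact disclosure encoding, and the signature step is omitted here); all claim values are illustrative.

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    # Base64url without padding, as used throughout the JOSE family
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim: str, value) -> str:
    # A disclosure bundles [salt, claim name, claim value] into one string
    salt = b64url(secrets.token_bytes(16))
    return b64url(json.dumps([salt, claim, value]).encode())

def digest(disclosure: str) -> str:
    # Only this hash goes into the "_sd" array of the signed payload
    return b64url(hashlib.sha256(disclosure.encode()).digest())

# Issuer side: build salted disclosures and a payload containing hashes only
claims = {"email": "jun.liang@example.com", "birthdate": "1989-01-01"}
disclosures = {name: make_disclosure(name, v) for name, v in claims.items()}
payload = {"_sd": sorted(digest(d) for d in disclosures.values()),
           "_sd_alg": "sha-256"}

# Holder presents only the email disclosure; the verifier re-hashes it and
# checks membership in "_sd". No other claim value is revealed.
presented = disclosures["email"]
assert digest(presented) in payload["_sd"]
```

The salts are what prevent a verifier from brute-forcing undisclosed claims: without the salt, hashing a guessed birthdate would reveal whether it matches a digest in `_sd`.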
Introduction

This article is about the creation process behind our mascot in Japan. This initial phase recounts the journey from when we received the character creation request until we gave it form. Hello, I am Sugimoto from the Creative Office. To explain our team briefly, the Creative Office is in charge of overseeing communication with customers of the KINTO Japan vehicle subscription service and using their feedback for planning and generating outputs. To elaborate a little, we are in charge of understanding the communication issues on the project side (we have the power to do this because we develop in-house!), coming up with tangible solutions, and communicating all messages from the business side in a consistent manner (= branding). With the above in mind, I will talk about how we created the KINTO mascot by leveraging our circumstances and knowledge. Keep in mind that this is, of course, not a guide on how to make cool characters. I hope you read it with a light heart, the way a mother would read her child's health journal, or perhaps the way you would reread the illustrated diary of your exciting summer vacation project from elementary school.

Embarking on the Character's Origin Story

The creation of our official mascot kicked off in November 2021. I was the Creative Director (CD), another member of the Creative Office was the Art Director (AD), and we were joined by members of the PR and Branding Team from the Marketing Planning Division. Together, we formed the "mascot character project" (hereinafter referred to as PJ). While calling it a project might sound grandiose, our intention was to start small and grow it gradually. Our aim was to appeal to young people and women to counteract the trend of their falling out of love with vehicles. Initially, it was not conceived as a corporate branding project with corporate commercials and posters right off the bat. That said, all of the members were eager to create and raise their own "child" who would be loved by everyone.

Although it was not official, we already had an illustration of a character on the service's website. Everyone in the company called him Kuroko-kun; "kuroko" in Japanese refers to the stage assistants in traditional theater plays. First, we had a meeting to discuss how to use Kuroko-kun. He would appear throughout the KINTO website and quietly support people who were thinking about owning a vehicle or had concerns about driving, but from behind the scenes. Although we were attached to Kuroko-kun, many of us thought it would be better to start from scratch and create a new KINTO character, so the new goal of the project shifted to creating one.

Kuroko-kun Works Hard!

Where Did We Start?

First, we clarified our purpose in developing the new character. [Aim: To turn consumers into fans of KINTO]. We wanted people who had never purchased a vehicle, or were not yet interested in vehicles, to learn how enjoyable having a car can be. We thought it would be best to have a character that accompanies you in the joy of driving, rather than one that teaches you about driving. Instead of going straight to an illustrator, we decided to let employees get involved and come up with their own character ideas. This had the added benefit of working as internal branding. We expected the engineers and business team members who had built the service to come up with rich ideas from an insider's perspective, and we were excited and nervous about the feedback we would get. During this proposal period, the team came up with character ideas to create a basis for characters to pick from. For the ideas we gathered, we decided on the character's direction using the three steps below.

1. Check that the character follows an archetype

This was the first step of the output process, and the Creative Office took the lead. KINTO Japan has its own brand personality, which I will talk about another time. It personifies the brand and describes what elements and personality it has (an archetype). The personality elements that make up the KINTO brand in Japan (KINTO-san, if you will) are the Explorer, which seeks a free driving style; the Every Person, which is familiar and empathizes with others; the humorous Jester; and the Sage, which shares specialized knowledge with others. At first it may sound all over the place, but it comes together if you see KINTO's personality as someone curious, who likes to entertain the people around them and wants to be useful with their knowledge. Of course, the new character would be a major representation of the brand, so we made sure its concept matched this personality.

2. Character characteristics and motifs

We organized our character ideas into "roles/attributes" and "motifs," and extracted the characteristics. These are some of the ideas.

3. Making character ideas that could solve our problem

We divided the characteristics and motifs organized in step 2 into four directions.
A: A "character that represents the very DNA of KINTO," representing the fun and freedom of driving and telling the brand story
B: An approachable "character that represents friendliness toward drivers"
C: A "character that symbolizes the elements of freedom and moving forward"
D: A "character that embodies innovation and intelligence"

Lovable Characters Gathered from Throughout the Company

We put up posters throughout the company asking for submissions. While the PJ members discussed the character concept, we collected 24 character proposals from volunteers from October to November 2022. During the screening, we ran a company-wide survey asking, "Which character do you like and think fits KINTO?" Incidentally, while making the mascot character we wanted to incorporate as many of the KINTO and KTC employees' hopes and wishes as we could, and we collected a good number of survey responses. Thinking about it now, I believe the employees were already starting to get attached to the characters at that point. Getting back on topic, the results of the initial survey showed that the idea with the "cloud" motif was popular, and clouds were also related to the origin of the company's name, KINTO. So we decided to go in the direction of a cloud character. From there, we compared the opinions of the judges (the project members) with the survey responses, found similarities and differences, and summarized which elements to add to the character. This was the core of deciding what kind of appearance and characteristics we wanted our child to have.

I will end today's article here. In the next article, I will tell you more about how our character took shape!
Introduction

Hello. I am Nakaguchi from the KINTO Technologies Mobile App Development Group. I am the team leader of the iOS team for the KINTO easy application app (hereafter, the iOS team). We hold retrospectives from time to time, but retrospectives are really difficult, aren't they? Am I drawing out everyone's honest opinions? What are the real issues our team is facing? Am I facilitating well? The list of worries goes on and on. The other day I watched a web seminar hosted by Classmethod, and a session there on "how to build a self-driven team" left a deep impression on me. I wanted to try the retrospective trial session offered by Classmethod that was introduced there, so I applied. In this article, I will share how it went.

Preliminary Hearing

Before the retrospective, we had a meeting with Mr. Abe and Mr. Takayanagi of Classmethod. To design the retrospective best suited to our team's situation, they spent nearly an hour hearing about the current state of the iOS team.

Overview of the Retrospective

On the day, Mr. Takayanagi and Mr. Ito came and facilitated. It was a roughly two-hour retrospective, and the overall flow was:

1. Self-introductions by all participants
2. Aligning everyone on the purpose of the retrospective
3. Individual work on "how to make the team a little better"
4. Pair work on the same topic
5. Sharing with the whole team
6. Pair work on concrete action plans
7. Sharing with the whole team
8. Closing

First Half

Of the roughly two hours, what stood out was that about half the time was spent on "1. Self-introductions" and "2. Aligning on the purpose." In the self-introductions, the facilitator asked questions such as your name or nickname, your role in the team, and who among these members you talk with the most or the least. It seems they were not only reading the team's atmosphere and each person's personality, but also gauging the relationships and chemistry between members. As for aligning on the purpose, we reached team agreement on the topic I had requested: what can we do to make the current team a little better? Our team finished a major release last September, and our work now centers on feature improvements and refactoring, so things are calm; however, it is apparently quite difficult in practice for a team in that state to actually make things a little better. In addition, I, as the organizer, told each participant individually why I had invited them (their role and what I expected of them). Doing this apparently makes it clear to participants what they should speak about, which makes it easier to speak up. For me too, it was a good opportunity to say things I rarely found the right moment for, or was too embarrassed to say directly. By devoting this much time to the first half, an atmosphere was created in which all participants could speak easily, and I felt that rapport building progressed considerably.

Facilitation in progress

Second Half

From "3. Individual work" onward, we proceeded through exercises. However, we did not use any of the usual retrospective frameworks; we simply repeated a single task of writing on sticky notes what would make the team a little better. After the individual work, we did pair work in groups of two. Apparently there are cases where pair work is a good fit and cases where it is not, and this team suits pair work. Also, how the pairs are formed matters: the point is to avoid combinations that would create a psychological burden. Afterwards, each person presented, and many opinions came out that I had never been able to draw out in the retrospectives I had run myself. I felt this was thanks to the rapport built in the first half and to the pair work. Then, based on the opinions gathered so far, we worked out concrete actions in "6. Pair work on concrete action plans" and presented once more.

Presentations in progress

The actions we decided to implement were as follows:

- Create a casual-chat channel where everyone can talk freely
- Hold a chat-focused meeting once a week
- Make the channel private rather than public, since sharing personal stories may build more trust
- Come to the meeting room for meetings when possible (many people were joining online from their desks even when in the office)
- Hold consultation sessions on the direction of assigned tasks
- Write deadlines explicitly on task tickets

We started working on what we could from the very next day.

Closing

At the end of the session, Mr. Takayanagi spoke about the importance of designing meetings: allocating time well and directing questions to people based on an understanding of their characteristics. In this retrospective in particular, they focused on the people, which is why they made heavy use of pair work partway through.

Closing in progress

Post-Retrospective Survey Results

We ran a survey afterwards; here is a summary (10 respondents).

Change in expectations: before 6.3 → after 9
NPS: 80 (What is NPS?)

AI summary of "Tell us why you were satisfied after participating (free text)": The survey results show that participants were satisfied with how the meeting was run and with the facilitator's explanations. There were also many positive comments about deciding concrete actions and those leading to next actions. Participants also valued the opportunity to understand their teammates' thinking and to hear things they do not usually get to hear. From these results, we can say the meeting was a meaningful time for participants. Even exceeding 0 is impressive, but the NPS was an astonishing 80!

Impressions

Through this retrospective, I realized that many members felt a lack of communication, and we were able to turn that into focused next actions, making it a very fulfilling retrospective. I was also glad the survey showed that the participating members were satisfied. In addition, I keenly realized what an important role a meeting facilitator plays. This is a highly advanced skill that cannot be acquired overnight, and I believe organizations should invest in developing and acquiring such talent. For my part, I will start by studying facilitation myself so that I can run better meetings.
Self-introduction Nice to meet you! I am Romie, developing the Android version of the my route app at the Mobile App Development Group. It's been two years since I began Android development during my previous job, where I mainly implemented all layouts in xml format, even for personal development. I must admit, I feel a bit embarrassed to say that I only started delving into Compose properly after joining KINTO Technologies in December 2023. And this is also my first time writing an article on this Tech Blog! Target Audience of This Article This article is intended: for absolute beginners in Android development for those who have only written layout design in xml format for any reasons, and have no prior knowledge about Compose for those who are having troubles displaying corrected components while testing on actual devices My Encounter with Preview The first time I joined this company I did not know anything about Compose, so I had to re-implement the following screen from xml format to Compose. ![About my route screen](/assets/blog/authors/romie/2024-02-08-compose-preview-beginner/03.png =200x) About my route screen As soon as I started code reading, I found a mysterious function named Preview. @Preview @Composable private fun PreviewAccountCenter() { SampleAppTheme { AccountCenter() } } The aim of this function was to display the preview of the account center button. However, since the function was not called anywhere in the same .kt file despite being a private function, I assumed that it was not used and decided to proceed with the implementation without making changes to any Preview-related code. After successfully completing the Compose process and creating a pull request, I received the following comment: "There is no Preview here, please add one!" I thought to myself, 'What's the point of creating a function that isn't called?' I then imitated what others built to add a preview of the entire screen. 
I did confirm that the actual device was working without problems, but I was only looking at the actual device back then. While still wondering what Preview was for, I looked at the Split screen of Android Studio. It was then that I realized the exact screen displayed on the physical device was also visible within Android Studio! So that's what Preview is for, I thought to myself; to display it on the Split screen without having to call the function! It is also written in its official document[^1]. There is no need to deploy the app to a device or emulator. You can preview multiple specific composable functions, each with different width and height limits, font scaling, and themes. As you develop the app, the preview is updated, allowing you to quickly see your changes. [^1]:Reference: Android for Developers Preview and Operation Check One day, I was working on enhancing the route details screen of the 'My Route' app, adding images and text for the 8 directions of travel section starting from the station exit. The implementation itself was done immediately, but the problem was to check the operation. It takes a lot of time to check whether images and wording for 8 directions are added correctly. The steps to reproduce are as follows. ![Steps to reproduce the direction section display](/assets/blog/authors/romie/2024-02-08-compose-preview-beginner/01.gif =150x) Steps to reproduce the direction section display So, how can we efficiently check all 8 directions? Furthermore, if the UI display is disrupted or incorrect images are shown, it requires additional time to rectify and verify it from the beginning. This is where Preview comes into play. It is implemented as follows. @Preview @Composable fun PreviewWalkRoute() { SampleAppTheme { Surface { WalkRoute( routeDetail = RouteDetail(), point = Point( pointNo = "0", distance = "200", direction = Point.Direction.FORWARD, ), ) } } } If you view the Split screen in Android Studio through Build, this is what you can see. 
Direction section preview screen

Just pass in the direction you want to verify, and you can confirm that the correct image and text appear. Also, to determine whether the layout is broken, it is enough to test a single scenario! This approach significantly reduces the time needed for checking.

## Conclusion

I am sure I will continue to learn about the many aspects of Compose, including Preview, as I move forward. For now, I simply wanted to share this brief experience of how a Compose beginner was amazed by the capabilities of the preview function. I hope you look forward to more from this newbie's journey.
## Hey, I found a bunch of NotFound error events in AWS CloudTrail!

Hello. I am Kurihara from the Cloud Center of Excellence (CCoE) team at KINTO Technologies, someone who still can't bring himself to dislike alcohol even after watching the Japanese series "Drinking Habit 50." As Tada from my team previously introduced in CCoE Activities and Providing Google Cloud Security Preset Environments, we work every day to keep our cloud environment secure. While analyzing AWS CloudTrail logs to check the health of our AWS accounts, I noticed that NotFound-type errors were occurring regularly and in large numbers. This may sound mundane, but if you are an AWS user, chances are you have encountered the same events. Despite searching extensively on Google, I couldn't find any relevant information, so I decided to document my investigation in a blog post.

## Conclusion

In short: when analyzing AWS CloudTrail, NotFound-type errors generated by the AWS Config recorder crawling resources through its service-linked role should be excluded from the analysis. These error events inevitably occur due to the behavior of AWS Config, so they should be filtered out to reduce analysis noise.

## Details of the Investigation

KINTO Technologies has a multi-account configuration whose Landing Zone is managed with AWS Control Tower, in accordance with best practices for AWS multi-account management. Accordingly, AWS Config manages configuration information and AWS CloudTrail manages audit logs. While analyzing CloudTrail logs to check the health of our accounts, I found that NotFound-type error events were occurring in large numbers and at regular intervals. Here are the results of an Amazon Athena analysis of about a month of CloudTrail logs from a certain AWS account. This account was issued with minimal security settings, and no workload has been built on it.
```sql
-- Analyze the most frequent errorCode values
WITH filtered AS (
    SELECT * FROM cloudtrail_logs WHERE errorCode IS NOT NULL
)
SELECT
    errorCode,
    count(errorCode) AS eventCount,
    count(errorCode) * 100 / (SELECT count(*) FROM filtered) AS errorRate
FROM filtered
GROUP BY errorCode
```

| errorCode | eventCount | errorRate |
| --- | --- | --- |
| ResourceNotFoundException | 1,515 | 18 |
| ReplicationConfigurationNotFoundError | 1,112 | 13 |
| ObjectLockConfigurationNotFoundError | 958 | 11 |
| NoSuchWebsiteConfiguration | 954 | 11 |
| NoSuchCORSConfiguration | 952 | 11 |
| InvalidRequestException | 627 | 7 |
| Client.RequestLimitExceeded | 609 | 7 |

```sql
-- Check the frequency of occurrence of a specific errorCode
SELECT
    date(from_iso8601_timestamp(eventtime)) AS "date",
    count(*) AS count
FROM cloudtrail_logs
WHERE errorcode = 'ResourceNotFoundException'
GROUP BY date(from_iso8601_timestamp(eventtime))
ORDER BY "date" ASC
LIMIT 5
```

| date | count |
| --- | --- |
| 2023-10-19 | 52 |
| 2023-10-20 | 80 |
| 2023-10-21 | 80 |
| 2023-10-22 | 80 |
| 2023-10-23 | 80 |

I picked a few error codes and looked at the AWS CloudTrail records (the actual CloudTrail logs are listed at the end of this article) and found that all of them recorded arn:aws:sts::${AWS_ACCOUNT_ID}:assumed-role/AWSServiceRoleForConfig/${SESSION_NAME} in the arn field of the userIdentity that was the access source. This is the service-linked role attached to AWS Config. I could not figure out why NotFound would occur even though the target resources exist, but when I checked the eventName field, I realized that each call was not an API to get the configuration of the resource itself, but rather of one of its dependent resources.

| Resource | errorCode | API that was called (eventName) |
| --- | --- | --- |
| Lambda | ResourceNotFoundException | GetPolicy20150331v2 |
| S3 | ReplicationConfigurationNotFoundError | GetBucketReplication |
| S3 | NoSuchCORSConfiguration | GetBucketCors |

Although these errors do not affect workloads, we wanted to eliminate them, because they are noise in day-to-day monitoring and troubleshooting.
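As an aside, the same noise can be stripped out in code when CloudTrail JSON files are post-processed locally instead of in Athena. Below is a minimal TypeScript sketch; the record shape is trimmed down and the function name is my own, not part of any AWS SDK:

```typescript
// Trimmed-down shape of a CloudTrail record (real records carry many more fields).
interface CloudTrailRecord {
  eventName: string;
  errorCode?: string;
  userIdentity?: { arn?: string };
}

// Drop events generated by the AWS Config service-linked role, mirroring the
// Athena predicate userIdentity.arn NOT LIKE '%AWSServiceRoleForConfig%'.
function excludeConfigNoise(records: CloudTrailRecord[]): CloudTrailRecord[] {
  return records.filter(
    (r) => !(r.userIdentity?.arn ?? "").includes("AWSServiceRoleForConfig")
  );
}
```

The filter errs on the side of keeping records: events with no `userIdentity.arn` at all are retained, so only clearly attributable Config crawl noise is removed.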
To make the errors themselves go away, we would have to take non-essential actions such as "configure something on each dependent resource" (for example, adding a Lambda resource-based policy that allows InvokeFunction actions only from the account itself). Instead, our CCoE team concluded that we should simply exclude access from the AWS Config service-linked role when analyzing AWS CloudTrail. If you analyze with Amazon Athena, the idea is to run a query like the following.

```sql
SELECT *
FROM cloudtrail_logs
WHERE userIdentity.arn NOT LIKE '%AWSServiceRoleForConfig%'
```

## A Brief Deep Dive

Based on insights gained during this investigation, I will delve a bit further into how AWS Config records configuration information. Two points are not explicitly stated in the official documentation but emerged from this investigation:

- the recording behavior of what I call dependent (supplemental) resources
- the frequency with which dependent (supplemental) resources are recorded

### Dependent (supplemental) resource recording behavior

AWS Config records not only the configuration information of a resource itself, but also its related resources (relationships). These are called direct relationships and indirect relationships.

> AWS Config derives the relationships for most resource types from the configuration field, which are called "direct" relationships. A direct relationship is a one-way connection (A→B) between a resource (A) and another resource (B), typically obtained from the describe API response of resource (A). In the past, for some resource types that AWS Config initially supported, it also captured relationships from the configurations of other resources, creating "indirect" relationships that are bidirectional (B→A). For example, the relationship between an Amazon EC2 instance and its security group is direct because the security groups are included in the describe API response for the Amazon EC2 instance.
> On the other hand, the relationship between a security group and an Amazon EC2 instance is indirect because describing a security group does not return any information about the instances it is associated with. As a result, when a resource configuration change is detected, AWS Config not only creates a CI for that resource, but also generates CIs for any related resources, including those with indirect relationships. For example, when AWS Config detects changes in an Amazon EC2 instance, it creates a CI for the instance and a CI for the security group that is associated with the instance.
>
> -- https://docs.aws.amazon.com/config/latest/developerguide/faq.html#faq-1

Separately from related resources, there are also resources, which I have named dependent (supplemental) resources on my own, that look like settings of the resource itself but have their own retrieval APIs. In the case of Lambda, the function itself is a resource that can be obtained with GetFunction, whereas its resource-based policy is a separate resource obtained with GetPolicy. Looking at the Configuration Item (CI), the resource-based policy, as a dependent (supplemental) resource, is recorded in the supplementaryConfiguration field as follows:

```json
{
  "version": "1.3",
  "accountId": "<$AWS_ACCOUNT_ID>",
  "configurationItemCaptureTime": "2023-12-15T09:52:19.238Z",
  "configurationItemStatus": "OK",
  "configurationStateId": "************",
  "configurationItemMD5Hash": "",
  "arn": "arn:aws:lambda:ap-northeast-1:<$AWS_ACCOUNT_ID>:function:check-config-behavior",
  "resourceType": "AWS::Lambda::Function",
  "resourceId": "check-config-behavior",
  "resourceName": "check-config-behavior",
  "awsRegion": "ap-northeast-1",
  "availabilityZone": "Not Applicable",
  "tags": { "Purpose": "investigate" },
  "relatedEvents": [],
  # Related resources
  "relationships": [
    {
      "resourceType": "AWS::IAM::Role",
      "resourceName": "check-config-behavior-role-nkmqq3sh",
      "relationshipName": "Is associated with "
    }
  ],
  ... (omitted)
  # Dependent (supplemental) resources
  "supplementaryConfiguration": {
    "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"test-poilcy\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::<$AWS_ACCOUNT_ID>:root\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:ap-northeast-1:<$AWS_ACCOUNT_ID>:function:check-config-behavior\"}]}",
    "Tags": { "Purpose": "investigate" }
  }
}
```

### Frequency of recording dependent (supplemental) resources

The frequency with which AWS Config records CIs depends on the RecordingMode setting, but this does not seem to apply to dependent (supplemental) resources. If these NotFound-type errors were caused by retry attempts, that might explain the pattern, but the observed behavior was that recording was attempted once every 12 or 24 hours, and the interval did not seem to follow any regularity based on the type of dependent (supplemental) resource. That is as far as my investigation got; the behavior remains quite a black box.

## Summary

The above introduced the identity of the mysterious NotFound-type error events output to AWS CloudTrail, and a countermeasure. The details remain to be investigated further, but we have confirmed that similar error events also occur from the service-linked role of Macie. Although AWS CloudTrail analysis is tedious work, it is also an opportunity to gain a deeper understanding of how AWS services behave, so let's do it proactively! For engineers who want to leverage AWS to the fullest (and who think Keisuke Koide is a talented actor), the Platform Group is currently hiring! Finally, I will conclude this article by listing each AWS CloudTrail error event. Thank you for reading.
### Lambda: ResourceNotFoundException

```json
{ "eventVersion": "1.08", "userIdentity": { "type": "AssumedRole", "principalId": "************:LambdaDescribeHandlerSession", "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/LambdaDescribeHandlerSession", "accountId": "<$AWS_ACCOUNT_ID>", "accessKeyId": "*********", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "*********", "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig", "accountId": "<$AWS_ACCOUNT_ID>", "userName": "AWSServiceRoleForConfig" }, "webIdFederationData": {}, "attributes": { "creationDate": "2023-12-03T09:09:17Z", "mfaAuthenticated": "false" } }, "invokedBy": "config.amazonaws.com" }, "eventTime": "2023-12-03T09:09:19Z", "eventSource": "lambda.amazonaws.com", "eventName": "GetPolicy20150331v2", "awsRegion": "ap-northeast-1", "sourceIPAddress": "config.amazonaws.com", "userAgent": "config.amazonaws.com", "errorCode": "ResourceNotFoundException", "errorMessage": "The resource you requested does not exist.", "requestParameters": { "functionName": "**************" }, "responseElements": null, "requestID": "******************", "eventID": "******************", "readOnly": true, "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "<$AWS_ACCOUNT_ID>", "eventCategory": "Management" }
```

### S3: ReplicationConfigurationNotFoundError

```json
{ "eventVersion": "1.09", "userIdentity": { "type": "AssumedRole", "principalId": "**********:AWSConfig-Describe", "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/AWSConfig-Describe", "accountId": "<$AWS_ACCOUNT_ID>", "accessKeyId": "*************", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "*************", "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig", "accountId": "<$AWS_ACCOUNT_ID>", "userName": "AWSServiceRoleForConfig" }, "attributes": { "creationDate": "2023-12-03T13:09:16Z", "mfaAuthenticated": "false" } }, "invokedBy": "config.amazonaws.com" }, "eventTime": "2023-12-03T13:09:55Z", "eventSource": "s3.amazonaws.com", "eventName": "GetBucketReplication", "awsRegion": "ap-northeast-1", "sourceIPAddress": "config.amazonaws.com", "userAgent": "config.amazonaws.com", "errorCode": "ReplicationConfigurationNotFoundError", "errorMessage": "The replication configuration was not found", "requestParameters": { "replication": "", "bucketName": "*********", "Host": "*************" }, "responseElements": null, "additionalEventData": { "SignatureVersion": "SigV4", "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "bytesTransferredIn": 0, "AuthenticationMethod": "AuthHeader", "x-amz-id-2": "**************", "bytesTransferredOut": 338 }, "requestID": "**********", "eventID": "*************", "readOnly": true, "resources": [ { "accountId": "<$AWS_ACCOUNT_ID>", "type": "AWS::S3::Bucket", "ARN": "arn:aws:s3:::***********" } ], "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "<$AWS_ACCOUNT_ID>", "vpcEndpointId": "vpce-***********", "eventCategory": "Management" }
```

### S3: NoSuchCORSConfiguration

```json
{ "eventVersion": "1.09", "userIdentity": { "type": "AssumedRole", "principalId": "***********:AWSConfig-Describe", "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/AWSConfig-Describe", "accountId": "<$AWS_ACCOUNT_ID>", "accessKeyId": "***************", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "*************", "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig", "accountId": "<$AWS_ACCOUNT_ID>", "userName": "AWSServiceRoleForConfig" }, "attributes": { "creationDate": "2023-12-03T13:09:16Z", "mfaAuthenticated": "false" } }, "invokedBy": "config.amazonaws.com" }, "eventTime": "2023-12-03T13:09:55Z", "eventSource": "s3.amazonaws.com", "eventName": "GetBucketCors", "awsRegion": "ap-northeast-1", "sourceIPAddress": "config.amazonaws.com", "userAgent": "config.amazonaws.com", "errorCode": "NoSuchCORSConfiguration", "errorMessage": "The CORS configuration does not exist", "requestParameters": { "bucketName": "********", "Host": "*************************8", "cors": "" }, "responseElements": null, "additionalEventData": { "SignatureVersion": "SigV4", "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "bytesTransferredIn": 0, "AuthenticationMethod": "AuthHeader", "x-amz-id-2": "*********************", "bytesTransferredOut": 339 }, "requestID": "***********", "eventID": "*****************", "readOnly": true, "resources": [ { "accountId": "<$AWS_ACCOUNT_ID>", "type": "AWS::S3::Bucket", "ARN": "arn:aws:s3:::*************" } ], "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "<$AWS_ACCOUNT_ID>", "vpcEndpointId": "vpce-********", "eventCategory": "Management" }
```
## Introduction

Hello, and thank you for reading! I am Nakamoto, a frontend developer on KINTO FACTORY (hereafter FACTORY), a service that lets you upgrade the car you are currently driving. In this article, I would like to introduce how we use AWS CloudWatch RUM to detect errors that occur on clients such as browsers.

## What Prompted the Introduction

The trigger was a report from our customer center (CC): a user trying to proceed from the FACTORY website to ordering a product was hitting a bug where the screen would not transition, and we were asked to investigate. We immediately analyzed the API logs and checked for errors, but found nothing that pointed to the problem. So next, we checked what kinds of devices and browsers were accessing the frontend. Looking up the user's access in the CloudFront access logs and checking the User-Agent, we found:

Android 10; Chrome/80.0.3987.149

It was access from a fairly old Android device. With that in mind, while analyzing the source of the page where the bug occurred, a frontend team member suggested that JavaScript's replaceAll looked suspicious... That function is only supported from Chrome version 85. (FACTORY's recommended environment is the latest version of each browser, so QA had not tested a version this old.) Incidentally, another team member also taught me that you can easily look up which browsers support a function from which version by searching for the function here!

Until then, FACTORY's monitoring detected errors at the BFF layer and below and sent notifications to PagerDuty and Slack, but we had no error detection on the client, so we could only notice problems like this when a customer contacted us. Realizing that, as things stood, we could not even notice client-side errors unless a customer took some action, we decided to do something about it.

## Detection Method

FACTORY's frontend was already loading the client.js of AWS CloudWatch RUM (Real User Monitoring). However, we were not using this capability for anything in particular (user journeys and the like are analyzed separately with GA), which felt like a bit of a waste. Looking into it, I learned that the RUM mechanism lets you send events from JavaScript on the client (such as a browser) to CloudWatch, so we decided to use it to send a custom event for detection whenever some error occurs.

## Notification Method

The rough flow of notifications is as follows.

1. When an error is detected in the browser, send a custom event via CloudWatch RUM with the error details in the message:

   ```javascript
   cwr("recordEvent", {
     type: "error_handle_event",
     data: { /* information needed for analysis, such as the contents of the exception error */ },
   });
   ```

2. A CloudWatch Alarm detects the above event and, when it fires, sends the error details to SNS.
3. The SNS topic notifies SQS, and a Lambda picks up the message and sends the error notification to OpenSearch (this part reuses our existing mechanism for detecting and notifying API errors).

## Results of Operating It

We rolled this mechanism out to production and have been running it for several months. Fortunately, no critical problems, such as the JavaScript error that prompted its introduction, have occurred. However, we have also been able to detect errors caused by unintended access from search engine crawlers, bots, and so on, access we had paid no particular attention to before, so it served as a renewed reminder of how important monitoring is.

## Finally

For a site like FACTORY, where customers shop on the web, we need to prevent, as much as possible, cases where an error stops a product from being purchased or a page from being displayed. However, there is a limit to how far we can guarantee behavior on every customer's device and browser.
Given that, when an error does occur, I think we need both to show the user as clear a message as possible (including what to do about it), and, on the operations side, a mechanism that lets us, the developers, quickly notice that it happened and what the problem is. We will keep making full use of various tools and mechanisms to aim for stable operation of the website.
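To make the error-reporting step above concrete, here is a hypothetical sketch of a browser-side handler. This is not FACTORY's production code: it assumes the CloudWatch RUM web client snippet has been loaded (installing the global `cwr` command function) and that the `error_handle_event` custom event type matches what the alarm watches for.

```typescript
// Shape of the data we attach to the RUM custom event (field names are my own).
interface ErrorEventData {
  message: string;
  source?: string;
  stack?: string;
}

// Build the payload separately from the wiring so it can be tested without a browser.
function buildErrorEvent(err: unknown, source?: string): ErrorEventData {
  if (err instanceof Error) {
    return { message: err.message, source, stack: err.stack };
  }
  return { message: String(err), source };
}

// The CloudWatch RUM web client snippet installs a global command function `cwr`.
declare const cwr:
  | ((command: "recordEvent", payload: { type: string; data: ErrorEventData }) => void)
  | undefined;

// Wire the handlers only when running in a browser with the RUM client loaded.
if (typeof window !== "undefined" && typeof cwr === "function") {
  window.addEventListener("error", (e) =>
    cwr("recordEvent", { type: "error_handle_event", data: buildErrorEvent(e.error, e.filename) })
  );
  window.addEventListener("unhandledrejection", (e) =>
    cwr("recordEvent", { type: "error_handle_event", data: buildErrorEvent(e.reason) })
  );
}
```

Separating the payload builder from the event wiring keeps the part that shapes analysis data unit-testable, while the listeners themselves stay a thin layer over the RUM client.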
As an authentication engineer at KINTO, I, Hoang Pham, will present an article about passkeys, which we implemented on the Global KINTO ID Platform (GKIDP). After attending "OpenID Summit Tokyo 2024" and hearing about passkeys combined with OIDC, I thought I should write about how much passkeys benefit our ID platform.

## I. Passkey Autofill on GKIDP

Passkeys are a replacement for passwords that provide faster, easier, and more secure sign-ins to websites and apps across a user's devices. Below is how users can authenticate with a passkey in a single click.

![](/assets/blog/authors/pham.hoang/Fig1.gif =400x)

Fig 1. Login by Passkey with KINTO Italy IDP

The beauty of passkeys lies in a seamless UX that looks exactly like familiar password autofill suggestions, so users do not need to understand the intricacies of how a passkey differs from a password. Behind the scenes, the system uses asymmetric cryptography, with no password or anything else the user must remember. Just a Face ID authentication, and everything is set! Passkeys are the most secure, state-of-the-art authentication mechanism in the field, supported by Android and iOS since late 2022, and still actively being developed and upgraded. To keep our GKIDP (Global KINTO ID Platform) up to date with the latest technologies, we introduced passkey autofill in July 2023, right after Mercari, Yahoo Japan, GitHub, and MoneyForward integrated it into their respective ID platforms. In the next parts, I will explain how we leverage passkeys for federated login and make GKIDP users more comfortable with our "Global Login" feature.

## II. Passkey on Federated Identity

To briefly explain our product: the Global KINTO ID Platform, or GKIDP, is the authentication system deployed, as of March 2024, in Italy, Brazil, Thailand, Qatar, and South American countries for the KINTO services in those locations.
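As a technical aside on the autofill flow from Part I: on the browser side, passkey autofill is generally triggered through the WebAuthn API with "conditional" mediation. The sketch below is a generic illustration under that assumption, not GKIDP's actual implementation; servers typically transport the challenge base64url-encoded, so a small decoder is needed before calling `navigator.credentials.get`.

```typescript
// Decode a base64url string (the usual wire format for WebAuthn challenges)
// into the Uint8Array that navigator.credentials.get expects.
function base64urlToBytes(s: string): Uint8Array {
  const b64 = s
    .replace(/-/g, "+")
    .replace(/_/g, "/")
    .padEnd(Math.ceil(s.length / 4) * 4, "=");
  const bin = atob(b64);
  return Uint8Array.from(bin, (c) => c.charCodeAt(0));
}

// Hypothetical autofill flow: "conditional" mediation surfaces passkeys in the
// browser's form autofill UI (the input needs autocomplete="username webauthn")
// instead of interrupting the user with a modal dialog.
async function signInWithPasskeyAutofill(challengeB64url: string): Promise<Credential | null> {
  const options = {
    mediation: "conditional",
    publicKey: {
      challenge: base64urlToBytes(challengeB64url),
      userVerification: "preferred",
    },
  };
  return navigator.credentials.get(options as CredentialRequestOptions);
}
```

The returned assertion would then be posted back to the IDP for verification; the decoder is the only part above that can run outside a browser.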
To comply with the GDPR and other data protection regulations, we separate GKIDP into multiple Identity Providers (IDPs) located in each country, and identify each user by a single Global ID through a "Coordinator". By leveraging the Global ID, users can enjoy shared benefits across KINTO services around the world.

Fig 2. GKIDP and Passkey-supported IDPs

In most cases (Fig. 1, login with a passkey), users simply use their local IDP for federated authentication and log in to KINTO services inside their country. In our case, however, the passkey was implemented on each of our IDPs (for example, the Brazil IDP) so that all RP (relying party) applications, or "satellite services" (for example, KINTO One Personal and other KINTO services in Brazil), gain passkey functionality. This advantage was also mentioned at the OpenID Summit Tokyo 2024, which we attended, so it was good to know we are on the right track in combining passkeys with the OpenID Connect protocol. Additionally, GKIDP has a unique feature that lets users log in not only to KINTO and KINTO-related services inside their country but also abroad, if they travel or move to other countries where other KINTO services operate. We call it the "Global Login" feature. It involves many steps, but it solves the difficulty of users having to remember multiple usernames and passwords from different countries. Implementing passkeys streamlines the global login process down to a few simple steps, with no need to remember or input any information. For example, Fig. 3 shows how the Italian KINTO Go user (the same user as in Fig. 1) can use global login to access the KINTO Share service in Thailand with just a few clicks, reducing the login time from an average of 2-3 minutes to around 30 seconds. Users can use a single passkey to access all KINTO services, regardless of whether the local IDP supports passkeys or not.
![](/assets/blog/authors/pham.hoang/Fig3.gif =300x)

Fig 3. Global Login with Passkey

The passkey is integrated not only into the local login and global login flows but also into all authentication screens, including re-authentication. Once a passkey is registered, users hardly ever need a password to verify anything again.

## III. Passkey and Some Interesting Numbers

Fig 4. Passkey registered users

In our Italy IDP, 875 users have registered and are using passkeys, accounting for 52.2% of new registrations since passkeys were released. We hope this number will increase as users update their OS to versions that support passkey autofill (iOS 16.0 or later and Android 9 or later). In Brazil, even though KINTO Brazil is focused on desktop PC users, where passkeys are not widely used on Windows PCs, passkey users still make up more than 20% of the 1,176 newly registered users.

## IV. Conclusion

As KINTO engineers, we are very excited to introduce new technologies for a passwordless future and to strengthen user data protection. Leveraging passkeys, users can log in with ease and with the highest level of security available today. We look forward to connecting many new KINTO services to our IDP hub: GKIDP.

Another article from Hoang Pham: https://blog.kinto-technologies.com/posts/2022-12-02-load-balancing/
## Introduction

Hello! This is Wada (@cognac_n), a data scientist at KINTO Technologies. In January 2024, KINTO Technologies launched the "Generative AI Development Project team" and I was assigned as a team member. This article serves as an introduction to our project.

## What is Generative AI?

Literally, it refers to artificial intelligence that produces new data. The release of ChatGPT by OpenAI in November 2022 thrust it into the spotlight. While AI has experienced a number of temporary booms(*1), the fourth AI boom(*2), driven by the development of generative AI, has gone beyond a mere boom and is beginning to take root in our daily lives and work. I believe that the use of generative AI, which will only grow more widespread, will have an impact significant enough to overturn many conventional norms in how we live and work.

## Past Initiatives

The project was launched in January 2024, but we had already been working on the use of generative AI for some time. Here are just a few of our efforts:

- An AI ChatBot developed in-house as a SlackBot for internal use (article in Japanese)
- An external hands-on event on the topic of generative AI (held in Nagoya, Japan)
- Internal promotion of generative AI tools
- DX of customer center operations using generative AI
- Everything from planning to development of new services using generative AI

And so on. However, there were many initiatives that unfortunately could not be undertaken due to lack of resources... Now that the project has officially been established as a team in the organization, I believe we will be able to promote the use of generative AI even more broadly. I am very excited about what's to come!

## What Our Project Aims For

### Our Mindset

What we value is "contributing to the company's business activities" through technology. Our goal is to solve internal issues with overwhelming "speed, quality, and quantity" as a "problem-solving organization".
Instead of merely trying things out and critiquing them, we will continue to work as an organization that focuses on value!

### The Impact We Want to Have on Our Company

We aim to become a company where every single employee treats the use of generative AI as normal! ...but how do we get there? Elements such as the following could help:

- recognizing which tasks are suited to generative AI and what can be entrusted to it
- learning how to write basic prompts for each kind of task
- creating a culture that accepts AI-generated output

In the rapidly changing world of generative AI, what shape should we aim for? I think we need to keep asking ourselves this question.

### To Do So

The project currently divides its generative AI initiatives into three levels.

- Level 1: "Give it a try" first, with existing systems
- Level 2: Create more value with minimal development
- Level 3: Maximize the value added to the business

The following is an image of the level classification and how initiatives proceed.

Level classification of initiatives on generative AI

Estimating the value of initiatives while aiming for the appropriate level

This does not mean that every initiative should aim for Level 3. If sufficient value can be created at Level 1, there may be no need to spend the cost and man-hours to take it to Level 2. The key is to try lots of ideas for quick wins at Level 1. For that purpose, it is ideal for all employees, including non-engineers, to have a level of AI literacy high enough to carry out Level 1 themselves.

## What We Want to Work on in the Future

### From an Assisted Form of "Let's Give It a Try"

It has been several months since we introduced in-house generative AI tools, but we still hear people say that they don't know what the tools can do or when to use them.
First of all, as the people with expertise in generative AI, we plan to increase the number of use cases where generative AI is applied, while providing careful support in identifying suitable tasks and writing effective prompts.

- At first, with careful support, we encourage people to give their ideas a try
- Increase the number of in-house use cases of generative AI
- Make in-house use of generative AI the norm

### Towards an Autonomous Form of "Let's Give It a Try"

If we keep the setup above, our capacity will soon become a bottleneck, and problem-solving won't scale if we are constantly providing assistance. We would therefore like those responsible for operations to recognize tasks suitable for AI themselves and entrust them to generative AI, by "trying out" Level 1 with basic prompts.

- Enable operational teams to make use of Level 1 themselves
- We instead offer advice and consulting on improving Level 1 ideas, or on how to take them to Level 2

### Training to Achieve These Goals

We will enhance in-house training to raise the level of AI literacy among employees. The goal is to foster a culture where many employees share a common understanding of generative AI, enabling smooth conversations about its use and acceptance of its output.

- Enhance in-house IT literacy training
- Tailor training to job type and skill level
- Conduct training at fine granularity, covering topics such as image generation, summarization, and translation
- Provide the training trainees actually need, based on their feedback, with a quick turnaround

### Sharing Information

We share our initiatives across various media, including this Tech Blog. We plan to release a variety of content, including technical reviews of generative AI and introductions to the project's initiatives. We hope you look forward to it!

## Conclusion

Thank you for reading my article all the way to the end!
It was a lot of abstract talk, but I hope it will be helpful to those who, like us, are seeking to leverage generative AI.

## References

[*1] Ministry of Internal Affairs and Communications. "History of Artificial Intelligence (AI) Research". (accessed 2024-01-16)

[*2] Nomura Research Institute. "Future landscapes changed by Generative AI". (accessed 2024-01-16)
## Introduction

Hello! I am Ren.M from the Project Promotion Group at KINTO Technologies. I usually do frontend development for KINTO ONE (used cars). In this article, I would like to introduce type definitions, one of the fundamentals of TypeScript.

## Target Audience of This Article

- Those who want to learn about TypeScript type definitions
- Those who know JavaScript and want to learn TypeScript next

## What is TypeScript?

TypeScript is a language that extends JavaScript, so the same syntax as JavaScript can be used. Traditional JavaScript does not require declaring data types, which allowed programs to be written with a certain freedom. However, as demands for program quality grew, it became necessary to prevent problems such as type mismatches. This is why TypeScript, with its static typing, came into use. Understanding type definitions lets you code smoothly and pass data around safely.

## Differences from JavaScript

In JavaScript, assignments between different data types like the following are possible:

```typescript
let value = 1;
value = "Hello";
```

In TypeScript, however, the behavior is as follows:

```typescript
let value = 1;
// Not assignable: not a number
value = "Hello";
// Assignable: also a number
value = 2;
```

## Main Data Types

```typescript
// string
const name: string = "Taro";
// number
const age: number = 1;
// boolean
const flg: boolean = true;
// array of strings
const array: string[] = ["apple", "banana", "grape"];
```

Explicitly declaring a type after the `:` is called a "type annotation".

## Type Inference

TypeScript automatically assigns types even without type annotations like the above. This is called type inference.

```typescript
let name = "Taro"; // inferred as string
// Bad: name is a string, so a number cannot be assigned
name = 1;
// Good: a string can be assigned
name = "Ken";
```

## Array Type Definitions

```typescript
// An array that only accepts numbers
const arrayA: number[] = [1, 2, 3];
// An array that only accepts numbers or strings
const arrayB: (number | string)[] = [1, 2, "hoge"];
```

## interface

You can use `interface` to define object types.

```typescript
interface PROFILE {
  name: string;
  age?: number;
}

const personA: PROFILE = {
  name: "Taro",
  age: 22,
};
```

As with `age` above, appending `?` to a key makes the property optional.

```typescript
// OK even without an 'age' property
const personB: PROFILE = {
  name: "Kenji",
};
```

## Intersection Types

A combination of multiple types is called an intersection type. Below, `STAFF` is one.

```typescript
type PROFILE = {
  name: string;
  age: number;
};
type JOB = {
  office: string;
  category: string;
};
type STAFF = PROFILE & JOB;

const personA: STAFF = {
  name: "Jiro",
  age: 29,
  office: "Tokyo",
  category: "Engineer",
};
```

## Union Types

Using `|` (a pipe), you can allow two or more types.

```typescript
let value: string | null = "text";
// Good
value = "kinto";
// Good
value = null;
// Bad
value = 1;
```

With arrays:

```typescript
let arrayUni: (number | null)[];
// Good
arrayUni = [1, 2, null];
// Bad
arrayUni = [1, 2, "kinto"];
```

## Literal Types

You can also make the assignable values themselves explicit as a type.

```typescript
let fruits: "apple" | "banana" | "grape";
// Good
fruits = "apple";
// Bad
fruits = "melon";
```

## typeof

Use `typeof` when you want to inherit the type of an already declared variable.

```typescript
let message: string = "Hello";
// Inherits message's string type
let newMessage: typeof message = "Hello World";
// Bad
newMessage = 1;
```

## keyof

`keyof` turns the property names (keys) of an object type into a type.

```typescript
type KEYS = {
  first: string;
  second: string;
};

let value: keyof KEYS;
// Good
value = "first";
value = "second";
// Bad
value = "third";
```

## enum

An enum (enumerated type) automatically assigns sequential numbers. Below, `SOCCER` is assigned 0 and `BASEBALL` is assigned 1. Using enums improves readability and maintainability.

```typescript
enum SPORTS {
  SOCCER,
  BASEBALL,
}

interface STUDENT {
  name: string;
  club: SPORTS;
}

// club is assigned 1
const studentA: STUDENT = {
  name: "Ken",
  club: SPORTS.BASEBALL,
};
```

## Generics

With generics, you can declare the type each time you use something. They are useful when repeating similar code with different types. By convention, `T` is often used.

```typescript
interface GEN<T> {
  msg: T;
}

// Declare T's type when using it
const genA: GEN<string> = { msg: "Hello" };
const genB: GEN<number> = { msg: 2 };
// Bad
const genC: GEN<number> = { msg: "message" };
```

If you define a default type, declarations such as `<string>` become optional.

```typescript
interface GEN<T = string> {
  msg: T;
}

const genA: GEN = { msg: "Hello" };
```

Also, combining with `extends` restricts which types can be used.

```typescript
interface GEN<T extends string | number> {
  msg: T;
}

// Good
const genA: GEN<string> = { msg: "Hello" };
// Good
const genB: GEN<number> = { msg: 2 };
// Bad
const genC: GEN<boolean> = { msg: true };
```

With functions:

```typescript
function func<T>(value: T) {
  return value;
}

func<string>("Hello");
// <number> may be omitted
func(1);
// Multiple types are fine too
func<string | null>(null);
```

With `extends` on a function:

```typescript
function func<T extends string>(value: T) {
  return value;
}

// Good
func<string>("Hello");
// Bad
func<number>(123);
```

Combined with an interface:

```typescript
interface Props {
  name: string;
}

function func<T extends Props>(value: T) {
  return value;
}

// Good
func({ name: "Taro" });
// Bad
func({ name: 123 });
```

## Closing

How was it? This article introduced part of the basics of TypeScript. TypeScript is used more and more in frontend work, and adopting it prevents data type mismatches, enabling safer development with fewer bugs.
I hope this article helps you even a little! There are many other articles on our Tech Blog, so please take a look if you like!
Introduction Hello. I'm Chris, and I do frontend development in the Global Development Division at KINTO Technologies. Today, I will talk about a somewhat common problem in frontend development and how to solve it! The Problem Sometimes you want to use an anchor tag (an <a> tag) to make the user scroll to a specific part of a page, like below. You can achieve this by giving an id to the element you want to scroll to and adding href="#{id}" to the <a> tag. <a href="#section-1">Section 1</a> <a href="#section-2">Section 2</a> <a href="#section-3">Section 3</a> <section class="section" id="section-1"> Section 1 </section> <section class="section" id="section-2"> Section 2 </section> <section class="section" id="section-3"> Section 3 </section> This is useful when you have long pages such as articles and terms of use. However, there are often fixed elements at the top of a page, such as headers, and the scroll position ends up slightly misaligned after clicking on a link. For example, suppose you have the following header. <style> header { position: fixed; top: 0; width: 100%; height: 80px; background-color: #989898; opacity: 0.8; } </style> <header> <a href="#section-1">......</a> <a href="#section-2">......</a> <a href="#section-3">......</a> ... </header> I intentionally made this header a little transparent. You can see that some of the content is hidden behind the header after the a-link is clicked. How To Solve With Just HTML and CSS You could solve this problem by getting the height of the header with JavaScript when the a-link is clicked, then subtracting the height of the header from the scroll position before scrolling. For this article, however, I want to show you a solution that uses only HTML and CSS. To be more specific, you prepare another <div> a little above the <section> you want to reach and make the user scroll to that element. Going back to the previous example, we will first create a div tag in each section. 
Then assign a class to the div tag, such as anchor-offset , and move the id that was originally assigned to the <section> tag to the newly created div tag. <section> <div class="anchor-offset" id="section-1"></div> <h1>Section 1</h1> ... </section> Then use CSS to style the <section> tag and .anchor-offset . /* use classes if you want to target only the elements that need to be anchored */ section { position: relative; } .anchor-offset { position: absolute; height: 80px; top: -80px; visibility: hidden; } With the above settings, when the user clicks on the a-link, they will scroll to a point a little above the corresponding <section> (80px in our example), offsetting the height of the header (80px). How to Write It in Vue Vue allows you to bind values to CSS. If you use this feature to set the height dynamically and make it a component, it will be easier to maintain. <template> <div :id="props.target" class="anchor-offset"></div> </template> <script setup> import { computed } from 'vue' const props = defineProps({ target: String, offset: Number, }) const height = computed(() => { return `${props.offset}px` }) const top = computed(() => { return `-${props.offset}px` }) </script> <style scoped lang="scss"> .anchor-offset { position: absolute; height: v-bind('height'); top: v-bind('top'); visibility: hidden; } </style> Summary This is how you can adjust the scroll position to account for fixed elements such as headers when the user scrolls to a specific part of the page with an <a> tag. Although there are many other solutions, I hope this one helps you!
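For comparison, the JavaScript approach mentioned earlier (subtracting the header height from the scroll position) boils down to one subtraction. Below is a minimal sketch of my own; the helper name anchorScrollTop and the 80px values are illustrative assumptions. In a browser you would feed it the target element's offsetTop and the header's offsetHeight, then call window.scrollTo with the result:

```typescript
// Scroll to the target's position minus the fixed header's height.
// Extracted as a pure helper so the arithmetic is clear; in the browser:
//   window.scrollTo({ top: anchorScrollTop(el.offsetTop, header.offsetHeight) });
function anchorScrollTop(targetTop: number, headerHeight: number): number {
  // Clamp at 0 so we never request a negative scroll position.
  return Math.max(0, targetTop - headerHeight);
}

// A section 500px down the page, under an 80px fixed header:
console.log(anchorScrollTop(500, 80)); // 420
// A target near the very top of the page:
console.log(anchorScrollTop(40, 80)); // 0
```

The CSS-only solution in the article effectively bakes this same offset into the layout, which is why no script is needed at click time.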
Introduction & What’s the story? I am Yuki T. from the Global Development Division. I am responsible for the operation and maintenance of products for the global market. Our team members in the Global Development Division come from diverse nationalities and speak a variety of languages. Among them, my team includes both members who don’t speak Japanese and members who don’t speak English. So you could say we have made huge efforts (struggles?) to establish communication within the team. In this article, I would like to introduce these efforts and the insights gained in the process. Conclusion - If you can’t speak English, Japanese is okay. - If you can’t speak, at least try writing. But if you do, write it precisely. - It’s hard, but it’s worth the effort. Introduction (What Kind of Team Are We?) Let me share a bit about my team (operations and maintenance) within the Global Development Division. We are a team of 8. The member composition and ways of working are as follows. Members: A mix of full-time employees and outsourcing company members. The team has been around for about a year now. At the beginning, we were only Japanese members, but a foreign teammate joined after a while. Work style: A hybrid of remote and in-office. Agile development (Scrum); a mix of remote and in-office members join the various Scrum events. Communication: Slack for communication, Teams mostly for meetings. Atlassian Jira and Confluence for task and document management. The language proficiency of the members varies, but the majority of them are Japanese (6 out of 8).

Classification | Language proficiency | Number of people
A | English only. Does not speak Japanese at all (foreign nationality) | 1
B | Mainly English. Daily conversational level of Japanese (foreign nationality) | 1
C | Mainly Japanese. Daily conversational level of English (Japanese nationality) | 2
D | Mainly Japanese. Can’t speak English; can read and write a little (Japanese and foreign nationality) | 4

By the way, I (Japanese nationality) would be a "C" above, with a TOEIC score of about 800. I can speak a little, but when it comes to complicated discussions, my lack of vocabulary becomes immediately apparent, and my listening skills are pretty poor. Next (Done - Learned - Next step) When the team was first formed, it consisted mainly of "C" and "D" level members (hereinafter referred to as "Japanese members"), and communication was mainly in Japanese.[^2] When English speakers at the "A" and "B" levels (hereinafter referred to as "English members") joined this team, we tried various things. Here is a summary of the results using the retrospective method[^3] (Done - Learned - Next step). I will divide each situation into three categories: 1. Contact method (Slack), 2. Meetings (Teams), and 3. Documents (Confluence and Jira). [^2]: Since the entire Global Development Division comprises many members of foreign nationalities (approximately 50%), we operate in an environment where a lot of communication and documentation is in English. [^3]: Done - Learned - Next step | Glossary | JMA Consultants Inc. https://www.jmac.co.jp/glossary/n-z/ywt.html 1. Contact Method (Slack) Done "Even if you don’t understand Japanese, you can read it by copying and pasting it into a translation tool, right?" Learned "Translation is readable only at the beginning." It is quite annoying to translate by copy-pasting each and every time (you’ll see if you try it yourself). Even when you are mentioned, there are surprisingly many cases in which the message is not really related to you, leading to a loss of motivation for copy-and-paste translation. It feels like wasted effort. In many cases, Slack also requires reading the entire thread, not just single messages, to grasp the meaning. This also seems to contribute to the difficulty of translation. Next step "Let’s write in both Japanese and English." 
Important messages were also written in English. The point is not to translate entire Japanese texts. I didn’t find any good examples to publish here, but for example: summarize the issue in simple English so it is easy to understand, and for the details, let readers translate the remaining text themselves or ask about it separately. It is difficult for the sender to translate everything. 2. Meetings (Teams) Done ① "I’ll be speaking in Japanese, so use the Teams translation function and read the subtitles." ② "Even if you’re not good at English, try your best to speak in English!" ③ "OK, then I’ll translate everything!" Learned ① "I don’t understand what it means even after reading." The conclusion is that the accuracy of machine translation between colloquial Japanese and English is still low. In particular, Japanese spoken in a casual meeting with a small number of people has various conditions that are adverse for machine translation, such as halting speech, ambiguous subjects and objects, and multiple people speaking at the same time. ② "No one is satisfied." People made the effort to speak in broken English, yet neither the Japanese nor the English members could understand. Also, if you don’t know how to say something in English, you don’t speak in the first place, so everyone became quieter compared to when speaking in Japanese. The meetings ended quicker, but with little information gained. ③ "Never-ending meetings" Since I had to speak in English after each Japanese member spoke, meetings simply took twice as long. In addition, with my English being just a little better than daily conversational level, I often got stuck on how to translate, which extended the time even more. And while we were speaking in English, the Japanese members would just be waiting. As a result, meetings tended to drag. Next step "If you are not good at English, you can use Japanese" I made it so that people who are not good at English could speak in Japanese. 
I then decided to focus on the content relevant to the English members and serve as the interpreter myself. This has helped keep meeting times as short as possible. "If you can’t speak, you can at least write it down." But if that were all, the amount of information conveyed to English members would be reduced. So I asked everyone to write meeting notes in as much detail as possible. That way, even if you do not understand something on the spot, you can read it later using the browser’s translation function. Incidentally, because we write down the words as we hear them, the notes may be a mixture of Japanese and English. "Still, effort is required." Even so, there are situations like Sprint Retrospectives where you have to convey the meaning in real time, not later. In such cases, I add translations on the spot, even if it takes time. For example (in blue): ![Example of Retrospective comment](/assets/blog/authors/yuki.t/image-sample-retro.png =428x) In the case of a Sprint Retrospective, while everyone is verbally explaining their ideas on Keep or Problem, I make good use of the gaps to add translations. 3. Documents (Jira and Confluence) Done "I’ll write it in Japanese, so use your browser’s translation function to read it." Learned "Confluence is relatively OK, but Jira is a bit tough." Design documents and specifications, which are mainly on Confluence, translate relatively well. Also, many of the documents in the Global Development Division are originally written in English, so there is no need to worry about those. However, the translation accuracy of comments on Jira tickets was poor. The main reason seems to be that, unlike official documents, comments on tickets often omit the subject or object, owing to how Japanese sentences are structured. There are also personal notes left in Japanese that not even native Japanese speakers would understand, so in a way this is only natural in some cases. 
Next step "Write accurately and concisely" So we tried to write without omitting the subject, predicate, and object. We also tried to write as concisely as possible (bullet points were recommended). This increased the accuracy of machine translation in browsers. Gains Thanks to these "next step" initiatives, communication within the team is now functioning to some extent. In addition, the following benefits were also found. More Information on Record We all developed the habit of taking notes, even for small meetings. As a result, we have less trouble going back over previous meetings and asking ourselves, "Do you remember what the conclusion was that time?" Less Tacit Understanding To translate into accurate English, it is necessary to make explicit the subject and object implied by the Japanese context. This gave us more opportunities to clearly define "who" will undertake a task and "what" is the target of a change. If you try it, you’ll realize how surprisingly often the "who" and "what" are not clearly defined in meetings. In such situations, you will have more opportunities to check, "Was XX-san in charge of this?" This can also reduce the number of tasks left unaddressed. Moreover, I sometimes hesitated to ask things like, "I wonder if XX-san will do it, but I don’t feel comfortable asking..." but having the purpose of "translating into English" made it easier to clarify such questions. More Diverse Opinions Can Be Expressed and Obtained I feel that the reduction of tacit understanding and clearer communication have led to "being able to say what we want to say and express diverse opinions." In addition, we are now able to incorporate more opinions from English members, which has given us perspectives that would be difficult to notice with Japanese members alone. 
For example, take the following idea from a Try: ![Example of retrospective comment](/assets/blog/authors/yuki.t/image-sample-retro.png =428x) This was a Try arising from a Problem that said, "I didn’t accurately write the background and purpose of the task in the ticket," a comment that is pretty serious (sorry about that), as is common in Japan. In comparison, the second suggestion, from an English-speaking member, to "approach it calmly" came from a completely different perspective, which made me think, "Hm, I see." Summary It takes a lot of effort to communicate when multiple languages are involved. However, I feel that these challenges not only affect immediate communication but also lead to new insights and more proactive opinions. "Diversity is a benefit and an asset, not an obligation or a cost." With this in mind, I am committed to furthering this effort.
I Worked as a Staff Member at try! Swift Tokyo 2024 With child-rearing settling down a bit, I had been wanting to get involved in outside activities again, and just then try! Swift Tokyo 2024 was recruiting day-of staff, so I applied! I had actually never even attended as a participant, so I signed up without knowing what the venue atmosphere was like 😅 This time, I'd like to write about my activities as a staff member. What is try! Swift Tokyo 2024? try! Swift Tokyo 2024 is a conference for iOS developers held in March 2024. Running since 2016, it is one of the largest conferences for iOS developers in Japan. It had been canceled for a long time due to COVID-19, but this year it was finally held again, for the first time in five years. For details, see the official website. In my view, another famous large iOS conference is iOSDC, which builds its timetable from proposals solicited mainly within Japan, whereas try! Swift solicits proposals from overseas as well and invites renowned engineers from abroad, so there were many situations requiring communication in English. Staff Activities This time, I worked as day-of staff. It was my first time working behind the scenes, and it was a very stimulating and fun experience. One week before the event, all the staff met face-to-face, and roles were assigned. I was in charge of the venue, which concretely involved the following: venue setup; guiding participants; providing venue information; handing out boxed lunches; collecting trash; venue teardown; and other miscellaneous venue tasks. ![](/assets/blog/authors/HiroyaHinomori/IMG_2773.jpg =400x) I usually spend my time writing programs, so I was worried whether my body would last the three days, but more than that, moving around and interacting with people felt refreshing. In particular, checking in participants and guiding them around the venue was fun because I got to communicate with them directly. However, since try! Swift has many overseas speakers and participants, English communication was also necessary, and I was painfully reminded of how lacking my English skills are. Since it was the first event in five years, many of the staff, myself included, were new, and at first there was a lot of confusion, but by the end of the first day, we were all cooperating and enjoying the work together. ![](/assets/blog/authors/HiroyaHinomori/IMG_2784.jpg =400x) It was also nice that many people had signed the sponsor board that remained during the teardown on day two 👍 At the After Party I joined after teardown, participants and staff could enjoy themselves together, and there were new encounters there too, which was very stimulating. On the final day, day three, workshops were held, and seeing participants engage with such enthusiasm boosted my motivation too 💪 I had a little free time, so I exchanged information with the other staff and had a good time. The churrasco we had at the wrap-up party after complete teardown was delicious too 😋 Conclusion ![](/assets/blog/authors/HiroyaHinomori/IMG_2804.jpg =400x) I wanted to take more photos, but I regret that I was so focused on work that I could hardly take any... By participating as staff, I gained new encounters and stimulation that I could never have gotten just by attending. I think it was a great experience. If I get the chance, I'd like to be a staff member next time too! I encourage everyone reading this article to try being conference staff! Finally, I'd like to say THANK YOU to all the organizers, speakers, and other participants!!! See you again 👍
We Held an Internal LT Event! Hello, I am Ryomm. I joined KINTO Technologies in October 2023. I am mainly on the iOS team developing an app called my route by KINTO . We held an internal Lightning Talk (LT) event at our Muromachi Office in Tokyo, and today I’m delighted to share the experience with you in this report! Event Background At a one-on-one meeting with my boss, we talked about how we would like to hold casual Lightning Talks, since we hadn’t had the opportunity to speak in front of others recently. I learned that other offices were doing this under the name of information-sharing meetings. So (on November 21), I posted on my Slack channel that I wanted to hold this event, and the conversation progressed without a hitch. Post on times (On November 27) A kick-off meeting was held by a group of volunteers. On the Committee Channel (On November 29) An announcement was made at a meeting attended by all employees, informing them that the venue would be the Muromachi Office. ![Muromachi Office Channel](/assets/blog/authors/ryomm/2023-12-28-LightningTalks/03.png =400x) Muromachi Office Channel A cute flyer was made! (On December 14) The timetable was announced. ![Cute timetable](/assets/blog/authors/ryomm/2023-12-28-LightningTalks/05.png =300x) A cute timetable was made! (On December 21) The LT event was held! At the venue The Lightning Talk event came together really fast, just a month after we first talked about wanting to do it! Thanks to the active participation of the Tech Blog team and many others, I think it was a very enjoyable meeting. In addition, with the help of the Corporate IT Group, we successfully ran a full Zoom live stream of the event! I was worried that we wouldn’t be able to get enough speakers, but we ended up with 12 willing participants in the end. (Some of them even came all the way from Nagoya!) 
At first, we started organizing it informally and just for fun, so I believe the LT event was made possible by the collaboration of everyone involved. Lightning Talks The talks were casually organized in various ways since they were for internal use only, meaning not all contents can be shared publicly. Nevertheless, here’s a summary of a few. Tag Based Release with GitHub Flow + Release Please ⛩ (Torii’s) Lightning Talk. His LT explained GitHub Flow (including a comparison with Git Flow), Tag Based Release, and Release Please, and how integrating these helps automatically generate a CHANGELOG and simplify version control. It made me want to try Release Please, because it addresses my concern of not wanting to release certain features yet while development of another version is ongoing. ⛩’s LT A Fan’s Way to Enjoy Formula 1 (F1), The Pinnacle of Motorsports mt_takao’s Lightning Talk. It was an LT introducing the excitement of F1, one unique to an automotive company! He emphasized that in F1, even a one-second gap is significant, and the strategies to bridge differences of 1/1000th of a second are what make F1 intriguing! The last part of his talk shared information about the "2024 FIA F1 World Championship Series MSC CRUISES Japan Grand Prix Race" to be held at the Suzuka Circuit in Mie Prefecture from April 5 (Fri.) to 7 (Sun.), 2024! Wow! It was an LT that definitely made me want to attend a race in person! mt_takao’s LT Toward the January Development Organization Headquarters Meeting Aritome’s Lightning Talk. It was an LT about his career journey, the lessons he gained along the way, his thoughts on KINTO Technologies, and his strategies for working energetically and with vitality. I am also looking forward to the first large-scale in-house event in January! Why Don’t You Let People Know About KINTO Technologies? HOKA’s Lightning Talk. 
She talked about her own experience as a public relations professional and her efforts to raise awareness of KINTO Technologies, as well as her involvement in organizational human resources and recruitment. Additionally, she asked for cooperation in promoting KINTO Technologies. The easiest way to do so is reposting via X, so I did it right away. 😎✨ Hoka’s LT Sketchy Investment​ Hasegawa’s Lightning Talk. Studying English is a super-high-return investment! It was an LT promoting Hasegawa’s style of studying and English learning! It concluded with a request for the company to consider introducing English learning assistance, which got the audience excited. It also made me think I should study English too. It was a very energetic LT! Hasegawa’s LT Lowering the Bar for LT Speakers + Announcement of Agile Meeting Kinchan’s Lightning Talk. He defined an LT as a place to convey what you like or what you find great: by conveying "your attributes × your likes and specialties," you can create a fun LT with originality! It was an inspiring talk that motivated people to participate in LTs! Perhaps because of this LT, about 70% of participants expressed interest in speaking at the next LT in the post-event survey. It was also the hottest LT, receiving the most votes in the "Best LT" poll. Kinchan’s LT The Impact of Tech Blog Posting on Your Career Three Years Later Nakanishi’s Lightning Talk. He talked about how continuing to write on the Tech Blog can improve your skills and potentially lead to book publications. He brought his real publication - a book of 1087 pages! I was surprised that consistent contribution to the Tech Blog eventually led to publishing such a substantial book. https://www.amazon.co.jp/dp/486354104X/ Let Bookings Do Your Troublesome Scheduling A Lightning Talk by Gishin of technology and trust. 
This LT was about Microsoft Bookings, which makes it easy to create booking sites and handles Teams/Outlook integration, surveys, reminder emails, and scheduling! It was actually used in this year’s medical checkup, and some participants responded, "Ooh, you mean that one!" (It was before I joined the company, so I have not personally seen it...) It was also interesting to hear that the underlying Exchange-based service had caused instability and unexpected situations. I thought I should use it next time we organize a drinking party! Already in use! An LT that Makes You Want To Go See "Motorsports" in Five Minutes Cinnamoroll’s Lightning Talk. He picked seven recommended motorsports to watch in Japan and encouraged us to visit the circuits to see them in person! He emphasized that the sound of the engines, the realism and intensity, the actual course conditions, the exhibits at the venue, and traveling there by car are all unique aspects of the circuits that should be experienced at least once. And going to the circuits by car is definitely recommended. If you like cars, you should also subscribe to KINTO and use the easy application app! Mr. Cinnamoroll did not forget to advertise our service too. LOL. He delivered his talk wearing the team uniform sponsored by KINTO, and it was a great LT that conveyed his passion! Cinnamoroll’s LT I Made Dumb Apps and Stuffed the Chimney♪ Hasty Ryomm Santa Claus’s Lightning Talk. My LT was about the Advent Calendar I posted for personal development. I shared my own experiences participating in the "Dumb App Hackathon" and "Dumb App Advent Calendar." After I gave this LT, some of you tried the Dumb App Advent Calendar and completed it in 3 days, which was great! Ryomm’s LT Someone wrote the Advent Calendar after listening to my LT https://speakerdeck.com/ktcryomm/kusoapurizuo-tuteyan-tu-jie-meta Corporate IT: A New Journey! Flo’s Lightning Talk. 
This LT was about her career, her work in the Corporate IT Group, and her future prospects! She was amazing, saying her job is to ensure that KINTO Technologies’ employees work happily. LOL Conclusion By limiting the event to in-house members, we were able to create an opportunity for casual public speaking, and I think it was an enjoyable event that also gave employees a chance to interact with each other. It must have been a great success as a launch event for the Muromachi information-sharing meeting! On a personal note, I had noticed that KINTO Technologies was relatively quiet on Slack, so I also created a live channel to encourage more active text communication, both online and offline. At the same time, the event record was kept so that the reactions to the LTs could be reviewed later, and the excitement of the event was made visible to draw the interest of non-participants to future events. We gathered survey responses into spreadsheets using Slack Workflow, ensuring that participants could enjoy the event without needing to leave Slack as much as possible. As a point of reflection, I disabled the chat for the Zoom webinar used for streaming but left the reactions enabled, so a certain number of people were just watching. I would like to improve on this next time. As it was the holiday season, we prepared Christmas costumes and Chammery (a non-alcoholic sparkling wine). As a pleasant discovery, we realized that food and drinks were easier to decide on when matched to the event being held! We have received requests from both participants and volunteers to hold a second LT event, so I would like to tie the next one in with some other event. We had a year-end party after the LT event, and it was great to hear people from other offices say that they watched the stream. The presenters also said that they were happy to receive feedback, so I am glad we held the event! 
It would be great if we could continue to set up these kinds of casual presentations and hear stories from different people in the company.
Hello! ( º∀º )/ This is Yukachi from the Budget Control Group and the Tech Blog project team. Today, December 24th, is the Arima Kinen! My fave is Sole Oriens🏇 Given that all the horses are at such a high level, I look forward to seeing what stories they'll bring us this year! How are you these days? Back to the topic: today I'd like to write about our (self-proclaimed) U-35 team members’ meetup with our president. Until the meetup I joined the company in December 2020, a full three years ago! I remembered that the number of employees was small at that time, so I checked the headcount and age ranges. 20s: 5, 30s: 14, total: 58 employees (in the former Development Organization Division of KINTO Co., Ltd., before the establishment of KINTO Technologies Corporation)! So few...! And as of now, 20s: 43, 30s: 131, total: 360 employees! What growth...! For the behind-the-scenes story of our growth, you can check HOKA 's article. ☟ Let's Make a Human Resources Group: How the Organization Rapidly Gained 300 Employees in 3 Years | KINTO Tech Blog When we were fewer, we used to have company-wide drinking parties, fostering cross-department connections more often. But with the current large number of people, we are limited to individual group gatherings. The constant influx of new hires makes it challenging to keep track of who is who... However, Chimrin , known for coming up with great ideas, planned this event! She always says, "Let's try this!" and comes up with various ideas. "We don't have the opportunity to interact directly with the president!" "I want more involvement with other divisions!" In response to these voices: "Let's host an exchange event between the president and young employees!" This is how we came to host the meetup. We all worked together to give it shape. And so it went! This time, President Kotera shared with us various aspects of his life, including his school days, past career, current position, and future vision. 
This part lasted one hour, but many people said in the survey that they wanted to hear more. Kotera-san is indeed a good storyteller! Well, I took some good photos for this article, so let me set up a little corner to show you. Hopefully, it will help convey the atmosphere! Excellent moderator, thank you! Interaction between divisions that are not usually involved with each other! New hires! Kotera-san was speaking so earnestly that I couldn't turn the camera on him much. Triple Yuji People tend to take photos at the photo spot. Everyone pitched in and cleaned up together cheerfully. Even after the meetup ended and Kotera-san left, the excitement lingered. A similar gathering then formed around Manager Ueda, who had come to check on his team members since they hadn't come back. He began with, "I'm delighted this happened," and spoke passionately. When I asked the employees in their 20s who couldn't make it this time why they were absent, they replied, "I thought it would be a more formal meeting." "I was scared to talk to the president!" As you can see from the pictures, it was a relaxed gathering, and Kotera-san is super friendly and enjoys socializing with different people over a drink! Selected survey comments! I learned that KINTO's services are consistently good, and how much importance the company places on car subscription. I had thought it was difficult to create new services, but I learned the importance of persevering and connecting with people. Hearing about the origins of KINTO Technologies directly from Kotera-san, who was involved in its establishment, helped me understand it with a sense of reality. I felt that I got to know Kotera-san better through his stories, his way of speaking, and his facial expressions. I used to work thinking only about the future, but after listening to Kotera-san's way of working, I realized that it is more important to do my best in the present. 
It was an opportunity to get to know Kotera-san's personality as well as to get to know people around my age. It was nice to be able to interact with people I have no involvement with in my daily work. I was able to share similar concerns with people my age! I'm glad to see that the survey results suggest a high level of satisfaction. Kotera-san also mentioned in his closing remarks, "Honestly, it was a lot of fun!" A second event is already scheduled to take place! Conclusion Relationships with other divisions tend to be lopsided, and it is difficult to get involved without an opportunity to do so. I am not an engineer, but I get lots of support from engineers my age. For example, Chimrin 's article, ✏ A Story of Simplifying Book Management Methods, came from someone saying: "This looks like a lot of work to manage. I wonder if using Jira would reduce the burden on Yukachi." So this is a story of love and inspiration created by a group of dedicated volunteers! Let me boast about it here. As Kotera-san mentioned during the meetup, human connections are vital in the workplace. The more people you know in various departments, the easier it is to seek advice and support when encountering challenges, and to foster the collaboration needed to accomplish tasks effectively, which leads to a better work environment. I think it is good to have as many chances as possible to make peers this way, so I'd like to continue to provide such opportunities together with everyone! Well, thank you for reading my article all the way to the end! Merry Christmas!
Introduction When I mention that I had no experience in the web industry before joining KINTO Technologies, I get looks of surprise: puzzled faces wondering how someone with such a career background could join (and make it at) the company. What’s more, this guy is in his 40s, unlike the young ones here in their 20s! Walking in a Different World Originally, I worked as a programmer developing embedded software for home appliances. From there, I moved on to control software for automotive ECUs. In my previous job, I was a project manager at a European company, introducing test equipment for engines. The software was just one part of the entire system, which comprised various elements: mechanical, electrical, measurement, fluidics, simulation, and more. From a world of neutral-grounded three-phase four-wire distribution systems, pressure-drop and torsional-vibration analysis, heat exchangers, flanged pipe connections, and CFD, to a world of modern cloud architecture. It has been a little more than three years since I jumped into that completely different world, so today I’d like to look back on the journey I’ve taken. ![a person in his 40s with no web experience](/assets/blog/authors/hamatani/20231223/40s_beginner.jpg =480x) Here’s what a 40-something with no web experience looks like, as depicted by generative AI Work at the Production Group I was assigned to the Production Group, not to an engineering position where I would actually be coding web systems. Internally, the Production Group is called "Pro G." Currently, four of us are working, and one team member is on childcare leave. We are the smallest group in KINTO Technologies. Boundary Spanner When talking about the role of the Production Group, the closest description I can think of is that we are boundary spanners who connect people and organizations. 
Our job involves collaborating with members of the business department to identify the type of system needed to achieve KINTO's goals and connecting them with the system development department. Among these tasks, I am mainly in charge of conducting "system concept studies" in the most upstream phase of large to medium-sized projects. ![Production Group](/assets/blog/authors/hamatani/20231223/produce_group.jpg =480x) The Production Group connecting business and systems (They all look so young)

Do Not Overengineer
So what kind of systems should be developed? Meeting business needs is a prerequisite, but I do not think that alone is enough. It is crucial not to overengineer. Initial requests for systemization tend to be rich and packed with content. From those, we try to understand the core requirements of the business side, identify the functions that are really needed, and narrow them down to a development scope that is necessary and sufficient. Most of the businesses that KINTO handles are unprecedented. Even if you imagine a system in your mind beforehand, it may not be all that useful in practice; there are things you can only find out by doing. I think it is best to start small with the minimum necessary system first, then grow it step by step as the business grows. Also, since the in-house development teams are a valuable asset of KINTO, their resources must be used without waste, allocated to each project efficiently according to its priority and target dates. By keeping the development scope compact, we can develop as quickly as possible and bring products to market as soon as possible. This sense of speed is part of KINTO's culture and its strength. Colleagues in charge of the business side may be unfamiliar with creating requests for the system development side.
Although they may have used systems before, many of them have never been involved in creating one from scratch, and the situation can be completely new to them. It is often said that developing a system from scratch is like building a house: but in this case, a house whose owner does not yet know exactly what he wants, since it's his first time building one. The Production Group also plays a role in leading such team members. Because we are an in-house development organization, we can work with, and sometimes lead, the business department on an equal footing, proposing a balanced system that is not overengineered. This awareness has spread throughout KINTO Technologies, and I think the mission of the Production Group is at its core.

A Hunting Tribe
The Production Group is basically like a hunting tribe where each member goes out to find their own prey (projects). At a group meeting, everyone's eyes light up when they hear about a new large-scale project coming up. Then, just like Neanderthals discovering a mammoth on the other side of the mountain, we quickly get ready to embark on the hunt. The first brave person who jumps on the prey becomes the main person in charge of the case. This has become customary practice. Note: Assignments may also be made based on location and areas of expertise.

Use Half, Let Half Go
I think it is best to travel as light as possible when you venture into a new world. Don't lean fully on your previous experience. When you start afresh in a new place, you may think, "I should make the most of my previous experience," because it's always a bit scary to jump in unarmed; you're expected to be ready to fight immediately. I feel that the only thing that alleviates the anxiety of being in a new environment is past experience. However, if you're overloaded with your past experiences and values, you leave little room to assimilate new things.
Past performance can no longer be changed, so holding on to it is like having roots grow out of your feet. When I first changed jobs, I failed because of that.

It Will Be Helpful Even If You Forget
So, let go of half. Things like persistence or style are the first to go. Isn't the right balance to make the most of half and let go of the other half? "Let go" means to "leave or forget for now," not to "lose or deny." Even if you don't usually think about them, when needed, the drawer where you left them will pop open and help you. No one can steal that from you, so you don't have to keep holding on tight. Thinking this way makes things easier for me. ![Experience Stock](/assets/blog/authors/hamatani/20231223/stock_of_experience.jpg =480x) Sorting through your own stock of experience

Feelings of project managers
In my generation there was a saying that rabbits die of loneliness, and project managers are just as vulnerable to it. I was a project manager at my previous job, and I know that when a project is going well, things are fine, but when it isn't, isolation accelerates. Members, customers, owners, supervisors, the finance department, the sales department, the service department, the home country, subcontractors, partners, and as some even mentioned, family: pressure comes from all sides, and before you know it, there's nowhere to run. Everyone at KINTO is kind, so I don't think it goes that far here, but it can still become lonely and challenging, so I would like to support other project managers as well. In most cases, projects are delayed because the baseline gets messy (e.g. scope, schedule, completion conditions). The best thing I can do as someone from Pro G is to ensure that a project is in good shape before we hand it over.

Things will work out in the end
In my previous six years alone, I worked on nearly 40 projects. That's a lot of projects.
There were times when things went wrong before succeeding, when budgets were overrun, and moments when the future looked bleak, but all of them eventually worked out somehow. Even if you can't reach the goal as gracefully as you imagined and instead tumble across the line breathless, a goal is still a goal. I believe that even when everything seems to be at a dead end, there is always a way out somewhere. It is a wonder to me that I can think this way after so many painful experiences.

Diversity is the norm
I've worked with many different people. Nationalities varied: Germany, Austria, France, Sweden, the Czech Republic, England, India, Sri Lanka, Malaysia, Thailand, Indonesia, Singapore, Taiwan, and South Korea, among others (for some reason, I never had the opportunity to work with people from the United States or China). As for occupations, I've worked with people in software development, mechanical design, electrical work, plumbing, delivery and installation, sales, finance, procurement, warehousing, general contracting, fire departments, automotive design and development, industrial robotics, and even collaborating competitors. It is natural for a variety of people to participate in a project, and it is precisely because they are different that their participation is worthwhile. KINTO also has people with various occupations and habits, which is interesting to see.

Making decisions is a project manager's job
My former supervisor said, "The job of a project manager is to decide," and I thought that was true. Being blamed for not making a decision is more serious than being blamed for a mistaken judgment; not deciding meant not doing a project manager's job. You may seek advice from your supervisor, but never delegate your decision-making. It is like sitting in the driver's seat without steering: the moment you leave a decision to someone else is the moment you should leave the driver's seat.
It was a global company, but I think it was like that for project managers in every country. Having spent my career with such values, I was surprised: project managers at KINTO/KINTO Technologies do not make decisions on their own, but with the agreement of all parties involved. I was quite puzzled by this difference, but in time I came to realize that this is also a valid way to conduct project management. If my previous job practiced a direct management style of Decision and Order, KINTO is more on the indirect side, which could be called Facilitation and Agreement. I feel a sense of respect for my teammates, as what KINTO is doing is quite advanced. It is simpler and easier to decide everything yourself; on the other hand, the approach of "advancing projects through consensus" may suffer from a lack of speed, while entrusting direct "judgments and orders" to an individual carries its own risks. I wonder if it is somehow possible to have the best of both worlds.

Phone calls
My flip phone was one of my most-used tools at work: calling people as soon as ideas came to mind, with so many outgoing and incoming calls that a daily history of dozens was common. Since project members were scattered all over the country, the only way to reach them immediately was by phone, and I didn't mind calling even in the middle of the night. Of course, this is not the case at KINTO Technologies, where communication happens mostly through Slack. When I was surprised by something so natural, it felt as if I had slipped through time from the past. However, what takes one minute on the phone can sometimes take 30 minutes over Slack, so it is necessary to discern when to use each. I'm sure you are all practicing this already.

Embedded software knowledge
I was deeply attached to it, but as expected, I had to let my preconceptions go.
Since it is a web system, there is no hard real-time requirement (response speed matters, but that is a different thing), and it does not operate on Event-Mode-Action state machines. It is also different from a continuous control system like PID. It is essentially hardware-independent, so resource constraints are loose. Therefore, sad as I was, I had to put my knowledge away in the back of a drawer. AWS SQS reminds me of the hand-made FIFO ring buffers of those days, and it makes me nostalgic. Even so, once there is a point of contact between the edge area, such as software-defined vehicles (SDV) and IoT, and KINTO's cloud, that knowledge may come into play again. So my drawer is buzzing with hope.

Get the Overall Gist
Because it is a different world, you will encounter unknown things just by walking around.

Understand the alpaca
When you first see an alpaca, you may think it looks like a sheep with a long neck, or like a white-haired camel. It is actually written in Chinese characters as "羊駱駝 (sheep camel)," so I think both points of view are valid. We have no choice but to honestly follow our upbringing and intuition. It is impossible to face an alpaca for the first time and understand it from scratch, so I think it is okay to understand something roughly at first, like "It's like the XXX I know." ![Observing alpacas](/assets/blog/authors/hamatani/20231223/alpaca_watchers.jpg =480x) You don't need to observe that hard!

It's different, but it's almost the same
Rather than focusing on the differences, focusing on the similarities will help you get used to the other world. I often rely on my experience in embedded software and engine test equipment to build rough understandings. However, this is not about pretending to understand (deceiving others), but about feeling like you understand (putting off further investigation for now). When you actually work with that knowledge, you have to face the fact that you "don't understand" it.
In other words, it is probably safe to leave things in the shallow end until then.

Overall rather than correctness
I find it more fitting to grasp the whole picture broadly, even if shallowly, making a few assumptions and leaving some parts undigested, rather than carefully and correctly understanding one thing at a time by scrutinizing the minutiae. Even if it is just a collection of dots, you can somehow sense a hidden story as you look at it (like a constellation?), or it can be a clue to reach the place you want to climb to (like bouldering?). In particular, I think the roles of project manager and boundary spanner benefit from that kind of holistic understanding.

To the point where you can dive
What can be read from the whole picture is only a hypothesis: "Assuming it's XXX, we can proceed this way." Once a hypothesis comes to mind, dig deeper if necessary. I have no desire to become a professional in each area (I can't), but it is enough to be able to talk with professionals. In my case, that's how I get things done. A byproduct of diving is that knowledge which was previously just dots becomes connected and forms lines. In this way, we connect lines little by little to make a map.

How to Spend the Rebellion Phase
As you get used to your work, things gradually come up.

Discomfort and irritation
I think everyone is humble until they get used to a new job, spending their time listening to their surroundings a little timidly. As you get used to the work, things gradually spring up: "Huh? Isn't the way this company works strange?" This is a feeling of discomfort that arises from the gap between past and present experiences. It is a very valuable realization in itself, and if handled well, it may lead to improvements. Maybe it's just me, but there is frustration involved: feeling irritated somehow.
Post-transition rebellion
I personally call this feeling after changing jobs "post-transition rebellion." It begins as early as around three months in and lasts until about the second year. I've changed jobs three times, and it always comes. Even if the feeling of discomfort itself is fine, you can't scatter irritation around you. In my case, I used one-on-one meetings to resolve it, talking to my direct supervisor, the superior two ranks above, and the superior three ranks above. To be heard, I needed to verbalize my discomfort, and in this process I first became objective, which calmed me down a little. It's tempting to get a little fancy and organize it like a suggestion. When we talk frankly in this way, the sense of discomfort gradually disappears. As for the sense of distance: two ranks above is the vice president, and three ranks above is the president. Flatness is the appeal of KINTO Technologies. If you sort out the sense of discomfort, some of it can be turned into your mission. ![Adult's rebellion](/assets/blog/authors/hamatani/20231223/rebellious_adult.jpg =480x) Generated "Adult's Rebellion" (the same person from the first image appears!)

Conclusion
Perhaps now is the best time to be proactive about new technology and knowledge.

When the old drawer opens
It is interesting to listen to people in various positions within the company, as well as to attend external events and talk with people from other companies. As I get stimulated and reconstruct my thinking, my rusty old drawer slides open, allowing for a potential chemical reaction between new and old ideas. I mentioned bouldering as an example: as you climb just a little, the view changes, and you start to realize things like what you'll be able to reach next, or whether you should reconsider because it's different from what you thought. If I accept such changes honestly, I can enjoy the next change. Conversely, when I felt distressed, it was usually when I was trying to stay where I was.
That's how we survive
"I used to use a Tiger hand-cranked calculator" or "I can read punch cards": when I got a job as a new graduate, engineers with that experience were still active. Both "cloud-native modern applications" and "LLM/generative AI" will eventually find their way into computer history as dead technologies. Technology will change, and the roles required will change too. I can't imagine what I'll be doing then, but I hope I will be able to survive tenaciously, replacing half of myself as I go along. ![Computer History](/assets/blog/authors/hamatani/20231223/history_of_computer.jpg =480x) Former state-of-the-art technology and apples lined up
Introduction
Hello, this is Rasel from the Mobile Application Development group of KINTO Technologies. Currently I'm working on the my route Android application. my route is a multimodal outing application that helps you gather information about places to visit, explore locations on the map, buy digital tickets, make reservations, pay for rides, and more. As you already know, mobile applications have become an essential part of our daily lives. Developers primarily create applications targeting the Android and iOS platforms separately, incurring double the cost. To reduce those development costs, various cross-platform application development frameworks like React Native and Flutter have emerged. But there are always complaints about the performance of these cross-platform apps: they don't offer the performance of natively developed apps. There are also recurring issues, and sometimes we have to wait a long time for framework developers to support platform-specific features newly released by Android and iOS. Here comes Kotlin Multiplatform (KMP) to the rescue, which offers native-like performance along with the freedom to choose how much code to share between platforms. In KMP, the Android application is fully native, as it is developed with Kotlin, Android's first-class language, so there are almost no performance issues. The iOS part uses Kotlin/Native, which offers performance closer to natively developed apps than any other framework. Today, in this article, we are going to show you how to integrate SwiftUI code with Compose Multiplatform in KMP. KMP (also known as KMM for mobile platforms) gives you the freedom to choose how much code you want to share between platforms and how much you want to implement natively, and it integrates seamlessly with platform code.
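The "choose how much to share" idea is normally expressed with expect/actual declarations spread across KMP source sets. As a single-file illustration only (plain Kotlin with hypothetical class names, not an actual multiplatform build), the shape looks like this:

```kotlin
// Shared contract: in a real KMP project this would be an `expect`
// declaration in commonMain, with `actual` implementations per platform.
interface PlatformGreeter {
    val platformName: String
}

// Hypothetical stand-ins for the androidMain / iosMain implementations.
class AndroidGreeter : PlatformGreeter {
    override val platformName = "Android"
}

class IosGreeter : PlatformGreeter {
    override val platformName = "iOS"
}

// Shared business logic: written once, used by both targets.
fun greeting(greeter: PlatformGreeter): String =
    "Hello from shared Kotlin code on ${greeter.platformName}!"

fun main() {
    println(greeting(AndroidGreeter()))
    println(greeting(IosGreeter()))
}
```

How much logic lives behind the shared contract, and how much in the platform implementations, is entirely up to the project; that dial is what KMP provides.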
Previously it was only possible to share business logic between platforms, but now you can share UI code too! Sharing UI code became possible with Compose Multiplatform. You can read our previous articles on this topic below to better understand the use of Kotlin Multiplatform and Compose Multiplatform in mobile application development. Mobile App Development Using Kotlin Multiplatform Mobile (KMM) Developing Mobile Applications Using Kotlin Multiplatform Mobile (KMM) and Compose Multiplatform So, let's get started~

Overview
To demonstrate SwiftUI integration into Compose Multiplatform, we will use a very simple Gemini chat application. We will develop the app with KMP, using Compose Multiplatform for the UI and Google's Gemini Pro API to reply to the user's queries in chat. For demonstration purposes, and to keep things simple, we are going to use the free version of the API, so only text messages are allowed.

How Compose and SwiftUI work together
First things first: let's create a KMP project using JetBrains' Kotlin Multiplatform Wizard, which comes with the necessary basic setup of KMP with Compose Multiplatform and some initial SwiftUI code. ![Kotlin Multiplatform Wizard](/assets/blog/authors/ahsan_rasel/kmp_wizard.png =450x) You can also create the project in the Android Studio IDE by installing the Kotlin Multiplatform Mobile plugin. To incorporate our composable code into iOS, we have to wrap it inside ComposeUIViewController, which returns a UIViewController from UIKit and can contain Compose code as its content parameter. For example:

```kotlin
// MainViewController.kt
fun ComposeEntryPoint(): UIViewController {
    return ComposeUIViewController {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            Text(text = "Hello from Compose")
        }
    }
}
```

Then we will call this function from the iOS side.
For that, we need a structure that represents the Compose code in SwiftUI. The code below converts the UIViewController of our shared module into a SwiftUI view:

```swift
// ComposeViewControllerRepresentable.swift
struct ComposeViewControllerRepresentable: UIViewControllerRepresentable {
    func updateUIViewController(_ uiViewController: UIViewControllerType, context: Context) {}

    func makeUIViewController(context: Context) -> some UIViewController {
        return MainViewControllerKt.ComposeEntryPoint()
    }
}
```

Here, take a closer look at the name MainViewControllerKt.ComposeEntryPoint(). This name is generated from our Kotlin code, so it may differ according to your file name and the code inside the shared module. For example, if your file in the shared module is named Main.ios.kt and your UIViewController-returning function is ComposeEntryPoint(), then you have to call it as Main_iosKt.ComposeEntryPoint(). Now we will instantiate this ComposeViewControllerRepresentable inside our ContentView() and we are good to go.

```swift
// ContentView.swift
struct ContentView: View {
    var body: some View {
        ComposeViewControllerRepresentable()
            .ignoresSafeArea(.all)
    }
}
```

As you can see, you can use this Compose code anywhere inside SwiftUI and control its size as you want from within SwiftUI. The UI will look like this: ![Hello from Swift](/assets/blog/authors/ahsan_rasel/swiftui_compose_1.png =250x) If you want to integrate SwiftUI code inside Compose, you have to wrap it in a UIView: since you can't write SwiftUI code directly in Kotlin, you write it in Swift and pass it to a Kotlin function. To implement this, let's add a UIView factory argument to our ComposeEntryPoint() function.
```kotlin
// MainViewController.kt
fun ComposeEntryPoint(createUIView: () -> UIView): UIViewController {
    return ComposeUIViewController {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            UIKitView(
                factory = createUIView,
                modifier = Modifier.fillMaxWidth().height(500.dp),
            )
        }
    }
}
```

And pass createUIView from our Swift code as below:

```swift
// ComposeViewControllerRepresentable.swift
struct ComposeViewControllerRepresentable: UIViewControllerRepresentable {
    func updateUIViewController(_ uiViewController: UIViewControllerType, context: Context) {}

    func makeUIViewController(context: Context) -> some UIViewController {
        return MainViewControllerKt.ComposeEntryPoint(createUIView: { () -> UIView in
            UIView()
        })
    }
}
```

Now, if you want to add other views, create a parent wrapper UIView like below:

```swift
// ComposeViewControllerRepresentable.swift
private class SwiftUIInUIView<Content: View>: UIView {
    init(content: Content) {
        super.init(frame: CGRect())
        let hostingController = UIHostingController(rootView: content)
        hostingController.view.translatesAutoresizingMaskIntoConstraints = false
        addSubview(hostingController.view)
        NSLayoutConstraint.activate([
            hostingController.view.topAnchor.constraint(equalTo: topAnchor),
            hostingController.view.leadingAnchor.constraint(equalTo: leadingAnchor),
            hostingController.view.trailingAnchor.constraint(equalTo: trailingAnchor),
            hostingController.view.bottomAnchor.constraint(equalTo: bottomAnchor)
        ])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Then add it to your ComposeViewControllerRepresentable and add views according to your needs:

```swift
// ComposeViewControllerRepresentable.swift
func makeUIViewController(context: Context) -> some UIViewController {
    return MainViewControllerKt.ComposeEntryPoint(createUIView: { () -> UIView in
        SwiftUIInUIView(content: VStack {
            Text("Hello from SwiftUI")
            Image(systemName: "moon.stars")
                .resizable()
                .frame(width: 200, height: 200)
        })
    })
}
```

The output will look like this: ![Hello from Swift with Image](/assets/blog/authors/ahsan_rasel/swiftui_compose_2.png =250x) In this way, you can add as much SwiftUI code as you want to your shared composable code. If you want to integrate UIKit code inside Compose, you don't have to write any intermediate code yourself: you can use the UIKitView() composable function offered by Compose Multiplatform and add your UIKit code inside it directly:

```kotlin
// MainViewController.kt
UIKitView(
    modifier = Modifier.fillMaxWidth().height(350.dp),
    factory = { MKMapView() }
)
```

This code integrates the iOS native map view inside Compose.

Implementation of the Gemini chat app
Now, let's integrate our Compose code inside SwiftUI and proceed with the implementation of the Gemini chat app. We will implement a basic chat UI using LazyColumn of Jetpack Compose. As our main focus is integrating SwiftUI inside Compose Multiplatform, we will skip the implementation details of other parts of the application, such as the Compose UI or the data and logic layers. We are using the Ktor networking library to call the Gemini Pro API. To learn more about Ktor, visit the Creating a cross-platform mobile application page. In this project, we implement the full UI with Compose Multiplatform and use SwiftUI only for the input field of the iOS app, as the TextField of Compose Multiplatform has some performance glitches on the iOS side. Let's put our Compose code inside the ComposeEntryPoint() function. This code contains the chat UI with a TopAppBar and a list of messages, plus a conditional input field that will be used by the Android app.
```kotlin
// MainViewController.kt
fun ComposeEntryPoint(): UIViewController = ComposeUIViewController {
    Column(
        Modifier
            .fillMaxSize()
            .windowInsetsPadding(WindowInsets.systemBars),
        horizontalAlignment = Alignment.CenterHorizontally
    ) {
        ChatApp(displayTextField = false)
    }
}
```

We passed false to displayTextField so that the Compose input field will not be active in the iOS version of the app. The value of displayTextField will be true when we call this ChatApp() composable function from the Android side, as there is no TextField performance issue on Android (it is a native UI component there). Now let's move to our Swift code and implement an input field with SwiftUI:

```swift
// TextInputView.swift
struct TextInputView: View {
    @Binding var inputText: String
    @FocusState private var isFocused: Bool

    var body: some View {
        VStack {
            Spacer()
            HStack {
                TextField("Type message...", text: $inputText, axis: .vertical)
                    .focused($isFocused)
                    .lineLimit(3)
                if (!inputText.isEmpty) {
                    Button {
                        sendMessage(inputText)
                        isFocused = false
                        inputText = ""
                    } label: {
                        Image(systemName: "arrow.up.circle.fill")
                            .tint(Color(red: 0.671, green: 0.365, blue: 0.792))
                    }
                }
            }
            .padding(15)
            .background(RoundedRectangle(cornerRadius: 200).fill(.white).opacity(0.95))
            .padding(15)
        }
    }
}
```

Then return to our ContentView structure and modify it like below:

```swift
// ContentView.swift
struct ContentView: View {
    @State private var inputText = ""

    var body: some View {
        ZStack {
            Color("TopGradient")
                .ignoresSafeArea()
            ComposeViewControllerRepresentable()
            TextInputView(inputText: $inputText)
        }
        .onTapGesture {
            // Hide keyboard on tap outside of TextField
            UIApplication.shared.sendAction(#selector(UIResponder.resignFirstResponder), to: nil, from: nil, for: nil)
        }
    }
}
```

Here, we added a ZStack containing our TopGradient color with the ignoresSafeArea() modifier, so that the status bar color matches the rest of our UI.
Then we added our shared Compose code wrapper, ComposeViewControllerRepresentable, which implements our main chat UI, and our SwiftUI view TextInputView(), which gives iOS users smooth input performance through native code. The final UI will look like this: Gemini Chat iOS Gemini Chat Android ![Gemini Chat iOS](/assets/blog/authors/ahsan_rasel/swiftui_compose_ios.png =300x) ![Gemini Chat Android](/assets/blog/authors/ahsan_rasel/swiftui_compose_android.png =300x) Here, the whole UI of this chat app is shared between Android and iOS through Compose Multiplatform, and only the input field for iOS is implemented natively with SwiftUI. The complete source code for this project is available on GitHub as a public repository. GitHub Repository: SwiftUI in Compose Multiplatform of KMP

Conclusion
In this way, we can overcome the performance issues of cross-platform apps with Kotlin Multiplatform and Compose Multiplatform while giving users a native look and feel. We can also reduce development costs, since we can share as much code between platforms as we want. Compose Multiplatform also enables code sharing with desktop applications, so a single codebase can serve mobile platforms as well as desktop apps. Additionally, web support is in progress, which will give you even more opportunities to share a codebase between platforms. Another big advantage of Kotlin Multiplatform (KMP) is that you can always opt back into native development without wasting your code: you can use KMP code as-is in your Android application, since it is native for Android, and develop the iOS app separately, reusing the SwiftUI code you have already written for KMP. This framework gives you not only high-performance applications, but also the freedom to choose what percentage of code to share, and to return to native development anytime you want. That's all for today.
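Before closing, a small appendix on the data layer we skipped. The Gemini generateContent endpoint accepts a JSON body of the form {"contents":[{"parts":[{"text":...}]}]}; the real app serializes it with kotlinx.serialization and sends it via Ktor, but a dependency-free sketch of just the payload construction (field names taken from Google's public API documentation, so verify against the current spec) could look like this:

```kotlin
// Builds the JSON request body for Gemini's generateContent call.
// Dependency-free sketch; a real app would use kotlinx.serialization + Ktor.
fun geminiRequestBody(userMessage: String): String {
    // Escape characters that would break the embedded JSON string literal.
    val escaped = userMessage
        .replace("\\", "\\\\")
        .replace("\"", "\\\"")
        .replace("\n", "\\n")
    return """{"contents":[{"parts":[{"text":"$escaped"}]}]}"""
}

fun main() {
    // This string would be POSTed as the request body.
    println(geminiRequestBody("Hello Gemini"))
}
```

A Ktor client would then POST this string with a JSON content type to the generateContent URL, with the API key attached as a query parameter.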
Stay tuned for more exciting articles on the KINTO Technologies Tech Blog. Happy coding!