KINTO Technologies Tech Blog
This article is the entry for day 13 in the KINTO Technologies Advent Calendar 2024 🎅🎄

Introduction

Hello. I am ITOYU, a front-end developer in the New Car Subscription Development Group of the KINTO ONE Development Division. Engineers, are you using GitHub? At KINTO Technologies we do too! We rely on GitHub's Pull Request feature to review and merge code. When merging, you have several options to choose from. Did you know that? In this article, I'll explain the differences between these options and share some pitfalls I've encountered along the way.

Topics Covered

GitHub Pull Request merge options:

- Create a merge commit
- Squash and merge
- Rebase and merge

Common pitfalls:

- Rebase and merge is not the same as git rebase.
- Your last selected merge option becomes the default for the next merge.

Prerequisites

A develop branch and a feature branch exist. You create a feature branch by branching off from develop, make changes, and then submit a Pull Request to merge it back into develop. (Figures: commit history of the develop branch; commit history of the feature branch.)

The Pull Request Merge Options on GitHub

Create a merge commit

The Create a merge commit option merges the commits from the feature branch into the develop branch while preserving their hash values. It also generates a new merge commit. After merging, the commit history will appear as follows. (Figure: commit history of the develop branch after the merge.) You can see that the hash values of the original commits are preserved and that a new merge commit is created, maintaining a clear record of the merge.

Features

- The hash values of the commits from the merge source are retained.
- A merge commit is created, preserving a record of the merge in the commit history.
Use cases

- When you want to keep multiple commits as they are
- When you want to keep a clear merge history

Squash and merge

The Squash and merge option combines all commits from the feature branch into a single commit before merging it into the develop branch. After merging, the commit history will appear as follows. (Figure: commit history of the develop branch after the merge.) Although multiple commits existed in the feature branch, they are consolidated into a single commit in the develop branch.

Features

- Combines all commits from the merge source into a single commit.

Use cases

- When you want to keep the commit history simple.
- When you need to consolidate multiple small changes into one commit.

Rebase and merge

The Rebase and merge option moves the commits from the feature branch to right after the latest commit in the develop branch, then merges them into the develop branch. Unlike squash, the individual commits remain intact. (Figure: commit history of the develop branch after the merge.) As you can see, no merge commit is created.

Features

- Preserves the commit history from the feature branch without modification.
- No merge commit is created, keeping the history clean.
- New commits with new hash values are created during the rebase, so the original commit hashes are not preserved.

Use cases

- When you want to keep the commit history intact without creating a merge commit.
- When you prefer a linear, structured commit history.

Common Pitfalls

I've explained each option above; now I'll highlight some pitfalls I've encountered along the way.

Rebase and merge and git rebase are not the same

In a medium-sized project, our team was collaborating on a development branch, and the develop branch we had branched from had been updated. I considered using git rebase to clean up the commit history of our project's branches. However, this approach would require a force push, which I wanted to avoid on branches we were all working on. Instead, I thought the Rebase and merge option in GitHub's Pull Request feature might be a safer alternative.
I believed this method would keep the commit history clean without requiring additional local work. After merging the Pull Request from the feature branch into the develop branch using the Rebase and merge option, I checked for any differences, and there were a lot. At first glance, the commit histories of the develop and feature branches seemed identical, but the hash values were different. This occurs because Rebase and merge generates new commit hashes rather than preserving the hash values from the source branch. I realized that Rebase and merge and git rebase behave differently, and I shouldn't expect them to produce the same results.

The option you selected last time becomes the default for the next merge

This one is more of a careless mistake than a pitfall, but it's important to highlight. My team typically uses Squash and merge to keep a clean commit history in the working branch. After my previous unsuccessful attempt with Rebase and merge, I returned to my usual workflow. Once the Pull Request I submitted was approved, I merged it as usual. However, something was off: for some reason, Rebase and merge was executed instead of Squash and merge. It turns out that the merge option you selected last is automatically set as the default for the next merge. This might seem obvious in hindsight, but I didn't realize it at first. I learned to be more cautious when changing merge options, as the selected option carries over to the next Pull Request, potentially affecting your workflow if you're not paying attention.

Conclusion

When merging a GitHub Pull Request, it is crucial to select the appropriate option based on the characteristics of the merge and your specific needs. Here's how I differentiate the options:

Create a merge commit: I use this option when I want to retain the commit hashes from the feature branch and maintain a clear record of the merge history.
This makes it easier to trace which branches were merged and when.

Squash and merge: I prefer this option when I want to combine my working commits into a single commit and maintain a clean, simple commit history. This helps combine smaller commits into one, making the history more concise and easier to read.

Rebase and merge: I choose this when I want to keep my commit history linear without creating merge commits. This makes it easier to follow the flow of changes and keeps the history clean.

Additionally, always double-check the currently selected merge option before merging a Pull Request to avoid any unexpected issues.
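For reference, the three merge options correspond roughly to local git workflows. The following is a sketch using the develop/feature branch names from the article's scenario; GitHub's server-side merges can differ in detail (for example, in committer metadata), so treat this as an illustration:

```shell
#!/bin/sh
# Demo repository with a develop branch and a feature branch on top of it.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb develop
echo base > file.txt && git add file.txt && git commit -qm "initial commit"
git checkout -qb feature
echo one >> file.txt && git commit -qam "feature commit 1"
echo two >> file.txt && git commit -qam "feature commit 2"

# 1) Create a merge commit: keeps feature's commits and adds a merge commit.
git checkout -q develop
git merge --no-ff -m "merge feature" feature
git log --oneline    # merge commit + 2 feature commits + initial commit

# 2) Squash and merge: collapses the feature branch into one new commit.
git reset -q --hard HEAD~1          # undo the previous merge for this demo
git merge --squash feature > /dev/null
git commit -qm "feature: squashed changes"
git log --oneline    # squashed commit + initial commit

# 3) Rebase and merge: replays feature's commits onto develop with NEW
#    hashes (here the rebase is a no-op because develop has not moved).
git reset -q --hard HEAD~1          # undo the squash for this demo
git checkout -q feature
git rebase -q develop
git checkout -q develop
git merge --ff-only feature
```

Note how step 3 produces no merge commit, and how, when develop has advanced, the rebase rewrites every replayed commit's hash, which is exactly the behavior described in the pitfall above.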
About KTC's Engineering Office

In January 2025, KINTO Technologies (hereafter "KTC") established a new Engineering Office. This article introduces the Engineering Office's activities, partly to raise awareness within the company.

What we want to do, and what we are doing

Our goal is simple: to strengthen KINTO Technologies as a development organization. To realize this over the medium to long term, we work in various ways to help the company acquire new capabilities and culture. As a group reporting directly to the Executive Vice President, we stay in constant sync with management issues, form hypotheses about what KTC needs, and each member, myself included, applies their own experience and expertise to a range of cross-organizational initiatives.

Our stance and mindset

In any organization, the grass looks greener elsewhere, and it is easy to focus only on what is missing. But to open up new possibilities, you also need to recognize and leverage the strengths you have built up. It is hard to move forward without discoveries of the form "if we add to, subtract from, or change this existing strength, we can become even stronger." Understanding the current situation correctly and acting with its context in mind is essential. KTC already has more than 350 people, and precisely because existing work and structures are well established, creating change is not easy. In an uncertain, ambiguous situation where no one knows the right answer for us, we also try to be patient, not rushing to easy conclusions and carefully identifying what is truly needed.

What we probably won't do

- Planning, promoting, and executing business measures and development projects
- Project and program management for development projects
- Recruiting, PR, internal community building, and the like

The Engineering Office plans and drives initiatives to strengthen the development organization, but unlike similarly named organizations at other companies, it does not directly handle recruiting, training, evaluation, or PR. For now, we collaborate with the dedicated groups for those functions at key points as our plans progress.

Activity example 1: Planning to accelerate work on our focus themes

KTC is steadily gathering highly capable members, but to deliver more as an in-house development organization, we need to mature as teams and as an organization. Through repeated discussions with the Executive Vice President, I was involved in articulating the themes we should focus on in 2025. Since "AI first" has a dedicated project, the Engineering Office covers the other three themes...
- Release first
- User first
- Organizational intensity

For these, we are planning support for volunteers and for each group's initiatives along two axes: (1) raising literacy across the whole organization, and (2) creating concrete examples and behavioral change.

Activity example 2: Supporting development teams' processes and communication

Drawing on team member Y.Naito's expertise and experience in SPI (Software Process Improvement), we provide support to development teams within the company. Concretely, this includes facilitating team retrospectives, assisting with the operation of development infrastructure (JIRA, Findy Team+, and so on), running study sessions on the knowledge needed for process improvement, and mentoring. The aim is to lead improvements in communication, tools, processes, and empowerment so that teams working together can achieve results that no individual member could achieve alone. We also unearth know-how and good practices that have accumulated with individuals or specific teams and spread them to other teams, expanding and deepening knowledge within the organization and ultimately raising the technical capability of the company as a whole.

Activity example 3: Creating an environment where individual growth and contribution drive each other

The Executive Vice President has asked that, as a technology company, "specialists such as engineers and designers should be able to thrive as specialists, have that properly translate into contributions to the company, and be evaluated accordingly." We have only just begun the discussion, so there is nothing definite to report yet, but to realize this request we have started consulting with HR and the department heads. To remain a company that contributes to the group through software development capability and technical strength, we believe it is important to build an environment where members can keep growing, through measures such as developing career paths and increasing opportunities to take on challenges.

Let's make our own work interesting! (Recruiting)

We have not yet opened a public job posting because the work is not yet systematized, but if you have the spirit of carving out a mission based on your own themes and convictions, I think you will enjoy it. Let's make our own work more interesting and more meaningful 💪

Requirements:

- Can act autonomously, taking advantage of a high degree of freedom
- Can keep moving forward through trial and error, even when feeling your way
- Can face uncertain situations and ambiguous problems patiently, without rushing to conclusions
- Can involve the people around you and create behavioral change

If you are interested, please apply via our open position listing, or, if we know each other personally, feel free to contact me by DM (・∋・)ノシ https://hrmos.co/pages/kinto-technologies/jobs/0000127

KINTO Technologies is also looking for people to work with us in a variety of other departments and roles! See our careers page for details.
What are UUIDs and which version should you use?

Recently, we had to respond to an incident: a service was down because of duplicate keys in its database. My team and I were scratching our heads because these were UUIDs - you know, those supposedly 'unique' identifiers. How could we have duplicates? It turns out the issue was caused by the service trying to add the same event twice, not by the same UUID being generated twice.

This incident got me thinking about UUIDs. What are they? How are they generated? What are their use cases? And most importantly, which version should you use?

What is a UUID?

UUIDs are usually used to provide an ID for a resource. UUID stands for "Universally Unique IDentifier". Looking at the name, there seem to be strong expectations of uniqueness for the values being generated. That's with good reason: even if we generated a huge number of UUIDs, for instance a few quadrillion (that's what comes after trillions), there is a 99.999% chance of them all being unique. If you are interested in the math behind these odds, I recommend reading this really great article. UUIDs are 'practically unique' rather than 'guaranteed unique.' The probability of a collision is so small that for most applications, it's more likely that your hardware will fail, or that a cosmic ray will flip a bit in your machine's memory, than that you'll experience a UUID collision. However, it's worth noting that these probabilities assume proper random number generation. If your random number generator is flawed or predictable, the actual probability of collisions can be much higher. I'll explain a bit more later in the article.

If you work in software, you probably already know what a UUID looks like, but just in case: UUIDs are 128 bits wide and composed of 5 parts, separated by hyphens.
They are usually represented using hexadecimal digits, looking something like this:

ccba8c00-cbed-11ef-ad79-1da827afd7cd
74febad9-d652-4f6b-901a-0246562e13a8
1efcbedf-13bf-61e0-8fb8-fe3899c4f6f1
01943a0e-dd73-72fd-81ad-0af7ce19104b

But wait! These UUIDs were actually generated using different versions of UUID! They were generated, in order, using version 1, version 4, version 6, and version 7. Try to figure out where the version is indicated in the UUID. Hint: it's somewhere in the middle. Hopefully you noticed that the version is indicated by the first character of the third part, right in the middle of the UUID. There is also a variant, indicated in the first character of the fourth part. The version indicates how the UUID was generated and the variant indicates the layout of the UUID, but you probably won't need to worry about the variant; the version matters the most.

So as we discussed, there are multiple versions of UUIDs. Aside from the version indicator that we discovered earlier, what are the differences between each version? Are they all equally able to generate unique UUIDs? Also, why would you use one version over another? Obviously, you should use the latest and greatest version of UUID, right? Very good question! Let's take a look at the different versions.

Version 1 and Version 6

Version 1 and 6 UUIDs are generated using the current time and the MAC address of the computer that generated the UUID. The timestamp part is located at the front of the UUID, and may include random bits or incremented counters depending on the implementation. The MAC address part is located at the end, so if you use the same computer, that part should never change. Interestingly, because the MAC address can be retrieved from a UUID, there is a privacy risk when generating a UUID version 1 or 6.
But that's also one of the pros of these versions: two computers cannot generate the same UUID. That makes these versions useful in distributed systems where global uniqueness is needed. The difference between version 1 and 6 is the order in which the parts of the timestamp appear in the UUID. Unlike version 1, version 6 UUIDs can be sorted chronologically, which can be useful for ordering in databases. As versions 1 and 6 use predictable elements (the time of generation and the MAC address), it is possible to guess a UUID, which makes them unsuitable for use cases that require the UUIDs to remain secret.

Version 2

Version 2 is similar to Version 1 in that both use a timestamp and the MAC address of the computer generating UUIDs. However, Version 2 also uses additional identifier data, namely the POSIX UID or GID. This makes Version 2 less random, and leaves less room for the timestamp, than Versions 1 and 6. As a consequence, only a limited number of UUID v2 values can be generated at a given time, making it less desirable for most uses. It is rarely used, usually not supported by libraries, and not fully documented in the UUID specification.

Version 3 and 5

Versions 3 and 5 are quite different from the other UUIDs. While the other versions aim to be random, Version 3 and Version 5 aim to be deterministic. What does that mean? They both use hashing algorithms to generate the UUID, making it reproducible. There is no randomness or timestamp involved: a given input should always produce the same UUID. Version 3 uses the MD5 hashing algorithm while Version 5 uses SHA1. These versions are particularly useful when you need to generate the same UUID repeatedly from the same input data. For example, imagine you're creating UUIDs for users based on their email addresses - you'd want the same email to always generate the same UUID, even across different servers or times.
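This determinism is easy to see with Python's standard uuid module (the email address below is just an illustrative input):

```python
import uuid

# Version 5 (SHA1-based): the same namespace and name always
# produce the same UUID, on any machine, at any time.
a = uuid.uuid5(uuid.NAMESPACE_DNS, "user@example.com")
b = uuid.uuid5(uuid.NAMESPACE_DNS, "user@example.com")
print(a == b, a.version)   # True 5

# Version 3 is the MD5-based counterpart; same input, but a
# different hash, so a different (still deterministic) UUID.
c = uuid.uuid3(uuid.NAMESPACE_DNS, "user@example.com")
print(c.version, c != a)   # 3 True

# By contrast, version 4 is purely random: two calls will
# (practically) never produce the same value.
print(uuid.uuid4() != uuid.uuid4())   # True
```

The `.version` attribute also confirms where the version digit we located earlier comes from.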
Another good example would be when you need to generate a primary key based on some data to avoid duplicates, but using the data itself as the primary key is not a good option. When choosing between Version 3 and Version 5, keep in mind that SHA1 is more secure than MD5 but also more compute intensive. If compute cost is a real concern for your use case, you might want to use Version 3, but most of the time you should pick Version 5, as it is the more secure choice. It's also more likely that you will experience a collision with MD5 than with SHA1, although the probability is still very low.

Version 4

Version 4 is the most widely used version of UUID. It uses random bits to generate the UUID, making it unique and unpredictable. It relies heavily on random number generation, but not all random number generators are actually capable of generating truly random numbers. Shocking, I know. Many programming languages use what's called a Pseudo-Random Number Generator (PRNG), which is fine most of the time, but for UUID generation you'll want to ensure your system uses a Cryptographically Secure PRNG (CSPRNG). Why? A regular PRNG might be predictable if someone analyzes enough of its output. CSPRNGs, on the other hand, are specifically designed to make predicting their output practically impossible, even if an attacker knows all previously generated values. Most modern UUID libraries use CSPRNGs by default, but it's worth checking just to be sure. As with the other versions, the only predictable part is the version indicator, so you could try impressing your friends by guessing that part. Version 4 UUIDs are great for most usage, generally when you need to generate a large number of UUIDs and don't need to sort them or reproduce them later. They are often used as keys in databases.

Version 7

Version 7 is designed to be a chronologically sortable variant of Version 4. Like Version 4, it uses random bits, but it also includes a timestamp, making the UUID sortable as well as unique.
Version 7 can be a great alternative to Version 4 when you want uniqueness but also want to be able to sort values by creation time. Version 7 uses Unix Epoch time for its timestamp, while Versions 1 and 6 use the number of 100-nanosecond intervals since 15 October 1582. This makes Version 7 a little easier to work with.

Version 8

Version 8 is a bit special, because it is custom. Vendors can implement it however they wish. You can implement it yourself; you just need to respect the UUID version placed in the third part of the UUID. You probably will never need to use it.

So, what should you use?

For most people, it will be version 4. It has the greatest guarantee of uniqueness and is relatively secure (as long as the random number generator is not predictable). If you want to be able to sort your UUIDs by creation time, you can reach for version 7, or even version 6 as long as you have no privacy concern about leaking your MAC address. For some cases, versions 3 and 5 are useful, but for most applications their use is limited.

Database keys?

Maybe you've seen discussions about using UUIDs for database keys. There are a few facts to keep in mind if you are considering it:

- UUIDs are large: they take up 128 bits. If you do not plan to store large amounts of data, the extra space taken by UUIDs might be significant. Alternatively, a 32-bit auto-incremented integer gives you about 2,147,483,647 rows, and if that's not enough, a 64-bit BIGINT goes up to 18,446,744,073,709,551,615. That should be enough for most use cases.
- For some databases, using UUIDs for keys can hurt insert performance. If insert performance is a concern, consider auto-incremented integers, or at least test your database's performance with UUIDs.
- UUIDs make it easier to merge or migrate data: with auto-incrementing integers you will have collisions, but with UUIDs you probably won't.
- Even if some UUIDs are sortable, they are not easy to read. Looking at two UUIDs, it's quite hard to know which one came first. That's quite minor, but it's something to keep in mind.

Most databases have some kind of module or function to generate UUIDs, so check your database's documentation to see how. It will probably also tell you whether there are performance issues or special considerations to take into account when using UUIDs.

Conclusion

Hopefully you now understand UUIDs and their different versions a bit better than before reading this article. Version 4 UUIDs remain the go-to choice for most applications. They have strong uniqueness guarantees and unpredictability, which is probably what you want from UUIDs. They're mostly used for database keys, distributed systems, and any scenario where you need globally unique identifiers without coordination. Version 7 is a good alternative when chronological sorting is desirable, as it offers a good balance between randomness and sortability. Versions 1 and 6 are useful in distributed systems where global uniqueness is needed, but they come with privacy concerns due to the inclusion of MAC addresses. Versions 3 and 5 are useful when you need to reproduce a UUID from a given input, but keep in mind that MD5 is not as secure as SHA1. If you plan to use UUIDs in your systems, keep these factors in mind when choosing a version:

- Your uniqueness requirements
- Whether chronological sorting is needed
- Privacy concerns (especially if using versions that include MAC addresses)
- Storage space constraints (maybe you don't need 128 bits for your keys)

While UUID collisions are theoretically possible, they're so improbable that they shouldn't be a primary concern in your system design - as long as you're using a proper implementation with a cryptographically secure random number generator.
If you do encounter a UUID collision (congratulations on defying astronomical odds!), it's more likely due to an application logic issue, like duplicate event processing, rather than an actual UUID generation collision. In such cases, focus on investigating your application's handling of unique constraints rather than questioning the UUID generation itself.
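As a sanity check on the odds quoted at the start of the article, the collision probability for version-4 UUIDs can be estimated with the standard birthday approximation, n(n-1)/2 pairs over a space of 2^122 equally likely values (the 5 * 10^15 figure below is just one reading of "a few quadrillion"):

```python
# Birthday-paradox estimate of a collision among n version-4 UUIDs.
# A v4 UUID carries 122 random bits (6 of the 128 bits are fixed
# by the version and variant fields).
n = 5 * 10**15                      # "a few quadrillion" UUIDs
space = 2**122
p_collision = n * (n - 1) / (2 * space)
print(f"collision probability ~ {p_collision:.2e}")   # ~ 2.35e-06
```

Even at that scale, the chance that all the UUIDs are unique stays comfortably above 99.999%, which matches the figure given earlier.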
Hello, this is HOKA from the Manabi-no-Michi-no-Eki (Learning Roadside Station) team. The Manabi-no-Michi-no-Eki team functioned a bit like a club activity, where we all worked on both our main jobs and this project during office hours. But this fall (September?), we officially joined the Developer Relations Group! For more details, check out this blog ↓↓ https://blog.kinto-technologies.com/posts/2024-12-03-the-next-goal/

Since we joined the Developer Relations Group, we figured, why not contribute to the December Advent Calendar from our team as well? We had previously written a Tech Blog article together, so we set up the meeting in a similarly casual manner this time as well, and Nakanishi, the founder of the KINTO Technologies Tech Blog, said with enthusiasm, "Let's write 15 articles."

"Wait… was that really the plan?" I wasn't so sure, so I asked him what the 15 topics were. Here's the list:

- 10 about podcast episodes
- 1 about Manabi
- 1 about our portal
- One article reflecting on our past year
- Our future plans in the Developer Relations (DevRel) Group

Looking at this list, I couldn't help but wonder… "Do we really have that much to write about? Is this even interesting?" "No, no, let's just keep writing. For example, you wrote a blog post in the spring. You could write about the reaction within the company, the change in the atmosphere, and so on," said Nakanishi-san, full of confidence. Since I have some PR experience, I figured I could write something, so I reluctantly said, "Okay, I'll give it a shot". Just as I was about to wrap things up, Kin-chan spoke up. "HOKA-san, you're not really convinced, are you? Are you forcing yourself?" He saw right through me. Since it's work, I thought it was natural that there would be things I wasn't fully happy with and things I would have to force myself to do, so I answered honestly, "Yes." Then I asked, "If we don't even have enough material to write about, why do this?
" I thought our current activities were not enough to be worth sharing, and honestly, I didn't find them all that interesting to read either (full disclosure here). The conversation went something like this:

Nakanishi: I want you to write as if you were talking to your past self before joining KTC. If you learned the company had this kind of atmosphere, you might actually want to join, right?

Hoka: Hmm. Yeah, nope. Doesn't sound like something I'd want to read at all...

Nakanishi: Honestly, a Tech Blog only needs to really hit home for one or two people a year.

Hoka: "!?"

Nakanishi: Honestly, there are articles on the Tech Blog that describe common things that happen in any company. But without them, people outside KTC wouldn't have any idea what's actually going on here. So it doesn't have to be some amazing discovery. Just writing about what's happening at the company is enough. Because when people read it, they'll simply see, "Oh, so that's what's been going on". In other words, all you need to do is document what you've done.

Hoka: "!?!?!?!?!?!?!?"

Nakanishi: Just write what you did, that's it. That makes it easy, and anyone can do it. And even if each post is just a small part of what happens at the company, if everyone keeps doing it, all those pieces will come together to show what KTC is really like.

HOKA: Got it. That makes so much sense (imagining a patchwork diagram in my head).

Kin: That's Nakanishi-san's strategy, isn't it? It's what differentiates our Tech Blog from others. Also, HOKA-san's concerns are the same as those of engineers who haven't written Tech Blog articles yet. I feel relieved to have heard HOKA-san's concerns.

I spent 10 years in corporate PR, where my writing goal was always to strip away personal feelings and concisely convey business performance and brand image. That's because my readers were always time-strapped reporters and editors.
But today was different: I got to experience firsthand how engineers communicate in a company built around engineers, and I learned a lot from it. Honestly, it was one of the most eye-opening moments since joining the company, so I had to put it into writing right away.

## Summary

- Our Tech Blog serves as a record of what happened.
- The Learning Roadside Station team is an amazing team where people can speak honestly.
- They are wonderful teammates who support each other through uncertainty.
- Everyone involved is continuously learning.
Migrate from EncryptedSharedPreferences to Tink + DataStore

Hello. My name is Osugi, and I'm part of the Toyota Woven City Payment Development Group. Our team develops the payment system used in Woven by Toyota's Toyota Woven City, covering a wide range of payment-related functions, from the backend to the Web frontend and mobile applications. In this post, I've summarized how we replaced EncryptedSharedPreferences, which has now been officially deprecated, in one of our Android apps.

Introduction

EncryptedSharedPreferences has been deprecated since v1.1.0-alpha07, with an official recommendation to replace it with Android KeyStore.

![Updates of security-crypto](/assets/blog/authors/osugi/20250616/security-crypto.png =600x)

Investigating Alternatives to EncryptedSharedPreferences

With EncryptedSharedPreferences being deprecated, we began exploring options for both data persistence and encryption.

Choosing a Data Persistence Method

In our app's use case, EncryptedSharedPreferences had only been used to store configuration data, so using SharedPreferences alone would have been sufficient. However, since we had this opportunity to refactor, we decided to follow the official recommendation and adopted DataStore as our persistence mechanism.

Choosing an Encryption Library

Following the official recommendation mentioned earlier, we initially planned to use Android KeyStore. However, we found that not only are there functional limitations depending on the API level, but achieving a high level of security using StrongBox also depends on the device specifications. This meant that simply implementing it in code might not guarantee the intended level of security. In our case, since the app was designed to run on devices managed via MDM, and we had already selected devices that support StrongBox, this limitation was not an issue. During our research on encryption libraries, we also came across Tink, a cryptographic library provided by Google.
Looking at Tink's repository, we found that it uses Android KeyStore to store its master key. To compare Android KeyStore and Tink in terms of maintainability and performance, we created a sample implementation.

Comparing Encryption Library Implementations

Below is a summary of sample code using Android KeyStore with StrongBox and TEE, as well as using Tink. We found that both were relatively easy to implement at a basic level. That said, Android KeyStore has some challenges:

- Key generation settings must be adjusted depending on the encryption algorithm
- Developers are responsible for managing initialization vectors (IVs)
- There are very few sample implementations available

Tink, on the other hand, wraps these aspects nicely, making implementation smoother.

Sample Implementation of Encryption and Decryption Using Android KeyStore

```kotlin
class AndroidKeyStoreClient(
    private val useStrongKeyBox: Boolean = false
) {
    private val keyStoreAlias = "key_store_alias"
    private val KEY_STORE_PROVIDER = "AndroidKeyStore"
    private val keyStore by lazy {
        KeyStore.getInstance(KEY_STORE_PROVIDER).apply { load(null) }
    }
    private val cipher by lazy { Cipher.getInstance("AES/GCM/NoPadding") }

    private fun generateSecretKey(): SecretKey {
        // Reuse the key if it already exists in the KeyStore.
        val entry = keyStore.getEntry(keyStoreAlias, null)
        if (entry != null) {
            return (entry as KeyStore.SecretKeyEntry).secretKey
        }
        return KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, KEY_STORE_PROVIDER)
            .apply {
                init(
                    KeyGenParameterSpec.Builder(
                        keyStoreAlias,
                        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
                    ).setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                        .setIsStrongBoxBacked(useStrongKeyBox)
                        .setKeySize(256)
                        .build()
                )
            }.generateKey()
    }

    fun encrypt(inputByteArray: ByteArray): Result<String> {
        return runCatching {
            val secretKey = generateSecretKey()
            cipher.init(Cipher.ENCRYPT_MODE, secretKey)
            val encryptedData = cipher.doFinal(inputByteArray)
            // The IV must be stored alongside the ciphertext for decryption.
            cipher.iv.joinToString("|") + ":iv:" + encryptedData.joinToString("|")
        }
    }

    fun decrypt(inputEncryptedString: String): Result<ByteArray> {
        return runCatching {
            val (ivString, encryptedString) = inputEncryptedString.split(":iv:", limit = 2)
            val iv = ivString.split("|").map { it.toByte() }.toByteArray()
            val encryptedData = encryptedString.split("|").map { it.toByte() }.toByteArray()
            val secretKey = generateSecretKey()
            val gcmParameterSpec = GCMParameterSpec(128, iv)
            cipher.init(Cipher.DECRYPT_MODE, secretKey, gcmParameterSpec)
            cipher.doFinal(encryptedData)
        }
    }
}
```

Sample Implementation of Encryption and Decryption Using Tink

```kotlin
class TinkClient(
    context: Context
) {
    val keysetName = "key_set"
    val prefFileName = "pref_file"
    val packageName = context.packageName
    var aead: Aead

    init {
        AeadConfig.register()
        aead = buildAead(context)
    }

    private fun buildAead(context: Context): Aead {
        return AndroidKeysetManager.Builder()
            .withKeyTemplate(KeyTemplates.get("AES256_GCM"))
            .withSharedPref(
                context,
                "$packageName.$keysetName",
                "$packageName.$prefFileName"
            )
            .withMasterKeyUri("android-keystore://tink_master_key")
            .build()
            .keysetHandle
            .getPrimitive(RegistryConfiguration.get(), Aead::class.java)
    }

    fun encrypt(inputByteArray: ByteArray): Result<String> {
        return runCatching {
            val encrypted = aead.encrypt(inputByteArray, null)
            Base64.getEncoder().encodeToString(encrypted)
        }
    }

    fun decrypt(inputEncryptedString: String): Result<ByteArray> {
        return runCatching {
            val encrypted = Base64.getDecoder().decode(inputEncryptedString)
            aead.decrypt(encrypted, null)
        }
    }
}
```

Performance Benchmarking of Encryption Libraries

We measured the encryption processing time of Android KeyStore and Tink. For Android KeyStore, we evaluated two execution environments: StrongBox and TEE. In the test code, a common encryption algorithm (AES-GCM) was used, and the process of repeatedly encrypting 10 KB of data was measured using Microbenchmark.
By using Microbenchmark, measurements were taken on an actual Google Pixel Tablet using a thread other than the UI thread. The test code is shown below:

```kotlin
import androidx.benchmark.junit4.BenchmarkRule
import androidx.benchmark.junit4.measureRepeated
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.platform.app.InstrumentationRegistry
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class ExampleBenchmark {
    @get:Rule
    val benchmarkRule = BenchmarkRule()

    @Test
    fun benchmarkTinkEncrypt() {
        val context = InstrumentationRegistry.getInstrumentation().context
        val client = TinkClient(context)
        val plainText = ByteArray(1024 * 10)
        benchmarkRule.measureRepeated {
            client.encrypt(plainText).getOrThrow()
        }
    }

    @Test
    fun benchmarkStrongBoxEncrypt() {
        val client = AndroidKeyStoreClient(useStrongKeyBox = true)
        val plainText = ByteArray(1024 * 10)
        benchmarkRule.measureRepeated {
            client.encrypt(plainText).getOrThrow()
        }
    }

    @Test
    fun benchmarkTeeEncrypt() {
        val client = AndroidKeyStoreClient(useStrongKeyBox = false)
        val plainText = ByteArray(1024 * 10)
        benchmarkRule.measureRepeated {
            client.encrypt(plainText).getOrThrow()
        }
    }
}
```

Here are the benchmark results:

| Encryption Backend | Average Encryption Time (ms) | Number of Allocations |
| --- | --- | --- |
| Android KeyStore (StrongBox) | 209 | 4646 |
| Android KeyStore (TEE) | 7.07 | 4786 |
| Tink | 0.573 | 38 |

Compared to Tink, which performs encryption in software, both Android KeyStore (StrongBox) and Android KeyStore (TEE) take significantly longer to process due to hardware access. Although the device we used in this test is relatively high-spec for an Android device, using Android KeyStore (particularly StrongBox) may require careful consideration of the user experience (UX).
Notes Incidentally, the actual execution environment used for key generation with Android KeyStore can be determined using the code below:

val secretKey = generateSecretKey()
val kf = SecretKeyFactory.getInstance(KeyProperties.KEY_ALGORITHM_AES, KEY_STORE_PROVIDER)
val ki = kf.getKeySpec(secretKey, KeyInfo::class.java) as KeyInfo
val securityLevelString = when (ki.securityLevel) {
    KeyProperties.SECURITY_LEVEL_STRONGBOX -> "STRONGBOX"
    KeyProperties.SECURITY_LEVEL_TRUSTED_ENVIRONMENT -> "TEE"
    KeyProperties.SECURITY_LEVEL_SOFTWARE -> "SOFTWARE"
    else -> "UNKNOWN"
}
Log.d("KeyStoreSecurityLevel", "Security Level: $securityLevelString")

Conclusion Since EncryptedSharedPreferences has been deprecated, we evaluated technologies as potential replacements. Following the official recommendation, we adopted DataStore as our data persistence solution. For encryption, after comparing Android KeyStore and Tink, we found that Tink was easier to use as it abstracts the key generation and encryption processes. It also offered better performance and met our security requirements, making it our preferred choice. It's worth noting that using Android KeyStore requires accounting for device-specific behavior. As such, it's important to carefully weigh both performance and security needs when choosing an encryption approach.
This article is the entry for day 5 in the KINTO Technologies Advent Calendar 2024 🎅🎄 I'm Onuma, an Android engineer in the Mobile Development Group at KINTO Technologies. I primarily work on developing the My Route , a mobility service app. In this article, I'll guide you through the process of building Android Automotive OS and developing in-vehicle apps, including both Android Automotive and Android Auto applications. Running Android Automotive OS on a Raspberry Pi What is Android Automotive OS? Android Automotive is an in-vehicle platform built on Android and integrated into the AOSP framework. It supports pre-installed Android apps for IVI systems, as well as second- and third-party Android apps. For more details, refer to the official documentation →  https://developer.android.com/training/cars?hl=ja#automotive-os What is AOSP? AOSP stands for Android Open Source Project, and all the elements that make up the Android OS are open source and available to the public. Android Open Source Project The latest OS developed by Google is released as open source after a certain non-disclosure period. Device developers customize the released OS by adding features or making modifications to suit their specific needs before installing it on smartphones, tablets, and other devices. What to prepare to build Android Automotive OS PC *It is necessary to meet the hardware requirements mentioned later. Display *A touch monitor is better. RaspberryPi 4B MicroSD 16GB should be enough. MicroHDMI-HDMI cable Hardware Requirements for Building OS: Ubuntu 22.04 Intel Gold 6226R (16 cores, 32 threads) At least 16 GB of RAM HD: 1TB *Note: Building on Windows or Mac OS is not supported. I tried to create an environment on AWS EC2 to build, but I gave up since the Free Tier couldn't meet the required specifications. Set Up the Build Environment Install the necessary tools for building. 
sudo apt-get install git-core gnupg flex bison build-essential zip curl zlib1g-dev libc6-dev-i386 libncurses5 lib32ncurses5-dev x11proto-core-dev libx11-dev lib32z1-dev libgl1-mesa-dev libxml2-utils xsltproc unzip fontconfig

Add Repo and Local_Manifest Android OS consists of a large collection of source code. Repo is used to check out Android source code. The components are loosely coupled, with each one being managed and developed in an independent Git repository. Repo is a tool that manages these numerous Git repositories based on a management file called a Manifest file.

## Install the Repo launcher
repo init -u https://android.googlesource.com/platform/manifest -b android-13.0.0_r35 --depth=1

## Add the local_manifest
git clone https://github.com/grapeup/aaos_local_manifest.git .repo/local_manifests

Add dav1d under line 46 <!-- FFmpeg --> of .repo/local_manifests/manifest_brcm_rpi4.xml (see: Added missing dav1d library in the local manifest by jijith700 · Pull Request #5 · grapeup/aaos_local_manifest · GitHub)

## Add missing dav1d library to local_manifest
<!-- FFmpeg -->
<project path="external/dav1d" name="raspberry-vanilla/android_external_dav1d" remote="github" revision="android-13.0" />

Compile
. build/envsetup.sh
lunch aosp_rpi4-userdebug
make bootimage systemimage vendorimage -j$(nproc)

Flashing and Deploying Images Clean the MicroSD card.

sudo umount /dev/sdb*
sudo wipefs -a /dev/sdb*
sudo wipefs -a /dev/sdb

Then create four partition tables and flash the images to the MicroSD card. There are three images to flash onto the microSD card: boot.img , system.img , and vendor.img . As I thought I could flash the images using the command sudo dd if=boot.img of=/dev/sdb1 bs=1M , I gave it a try, but the steps were too complicated. So, I used a partition editing tool called GParted .
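For reference, the dd route mentioned above can at least be planned safely from the shell. The sketch below only prints the commands it would run, nothing is written to disk; the device name and the boot/system/vendor partition order are assumptions, and GParted remains the approach actually used in this article.

```shell
DEV=/dev/sdb   # assumed target device; verify with lsblk before doing anything real

# Build a dry-run plan: one dd command per image/partition pair, printed, not executed
plan=$(
  i=1
  for img in boot.img system.img vendor.img; do
    echo "sudo dd if=$img of=${DEV}$i bs=1M conv=fsync"
    i=$((i + 1))
  done
)
echo "$plan"
```

Reviewing the printed plan before removing the echo is a cheap way to catch an off-by-one partition mistake, which is exactly the kind of error that makes this step feel complicated.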
Boot Android Automotive OS Insert the microSD card into the Raspberry Pi to boot. If you connect a touch monitor to the Raspberry Pi, you can conveniently operate it without a mouse. But I don't have a touch monitor, so I'm connecting a PC monitor instead ;-;

Develop In-Vehicle Apps that Run on Android Auto and Android Automotive OS Next, I'll walk you through the basics of implementing and debugging in-vehicle apps on Android. While Android Auto works by connecting your smartphone to display apps on the car's screen, Android Automotive OS has Android built directly into the vehicle system, allowing apps to be installed directly. This time, I implemented a navigation app as a trial. The development environment used below is a Mac.

Supported app categories and corresponding Android APIs

Media: Apps for music, podcasts, and audiobooks. Use the MediaBrowserService to browse content and control playback. Use the MediaSession to notify the system of playback status and metadata.
Navigation: Turn-by-turn navigation with audio and visual guidance. Use the NavigationManager from the CarAppLibrary to control navigation start, end, destination setting, and turn-by-turn guidance.
Point of Interest (POI): Apps to find locations such as parking lots, EV charging spots, gas stations, etc. Use the PlaceClient to implement features such as finding locations, getting more information, and Place Autocomplete. Use the PlaceListMapTemplate from the CarAppLibrary to view POIs on a map.
Messaging (Android Auto only): Hands-free message replies with voice input. Use the MessagingManager from the CarAppLibrary to control sending and receiving messages, voice input, and sending template messages.
Game: Apps for entertainment while parked. Use the ScreenManager from the CarAppLibrary to view the game screen while parked. Use the InputManager to receive control inputs for your game.
Browser & Video Browser integration and video playback features (specific to AAOS, often used while parked) Use the WebTemplate from the CarAppLibrary to display web content. Use the VideoTemplate to play video content. These templates are recommended for use only while parked. Supplement I've summarized the key points from the table in the Official Documentation . Since new categories are added every year, even if you can't widely release your app yet, there is a possibility it could be released in the future. CarAppLibrary is a Jetpack library for Android Auto and Android Automotive OS app development. PlaceClient is a client that uses the Google Places API. Desktop Head Unit (DHU) What is a DHU? DHU is a tool for emulating the Android Auto environment on a desktop. It allows you to simulate the in-vehicle experience without using an actual in-vehicle device. Why use DHU? You can test how the app operates and displays in an in-vehicle environment. It lets you debug and verify that your UI/UX complies with guidelines to avoid distracting drivers. Run the DHU The following are required to run the DHU: MacBook Android device Install the Android Auto Desktop Head Unit Emulator in the SDK Manager. Make sure there is a desktop-head-unit in Library/Android/sdk/extras/google/auto. Grant permissions to the desktop-head-unit. chmod +x ./desktop-head-unit Forward the socket connection to the same port number on the Android device. adb forward tcp:5277 tcp:5277 Open the Auto settings on your Android device. Tap [See all apps] > [Android Auto] > [Advanced] > [Additional settings in the app]. Tap the version and permission information about 10 times to enable developer mode. Run the DHU. ./desktop-head-unit --usb About Host When running an app created for Android Auto or Android Automotive in a compatible car, the app doesn't interact directly with the car. At this stage, the connection destination is the Android Auto app on the Android device. 
During the DHU installation process, connecting to the actual device via USB is necessary since it needs to communicate with the Android Auto app, which serves as the host. The Android Auto app is referred to as the host, and all Auto-compatible apps interact with this host. In the case of an Android Automotive-compatible car, the OS is built into the vehicle system itself, so Android Automotive acts as the host.

Libraries CarAppLibrary is a Jetpack library for Android Auto and Android Automotive OS app development. Apps built using the CarAppLibrary run through the host app rather than running directly on Auto or Automotive. Declare the version of CarAppLibrary in the project-level build.gradle.

buildscript {
    ext {
        car_app_library_version = '1.4.0'
    }
}

dependencies {
    ...
    implementation "androidx.car.app:app:$car_app_library_version"
    ...
}

Add services and sessions Add a class that inherits from CarAppService. You need to extend the CarAppService bound by the host. In the intent filter, you need to declare androidx.car.app.category.POI as the category for the car app.

<service
    android:name="com.example.places.carappservice.PlacesCarAppService"
    android:exported="true">
    <intent-filter>
        <action android:name="androidx.car.app.CarAppService" />
        <category android:name="androidx.car.app.category.POI" />
    </intent-filter>
</service>

Methods of the CarAppService abstract class such as onBind and onUnbind cannot be overridden; the library takes care of proper interaction with the host app for you. You only need to implement createHostValidator and onCreateSession. The HostValidator returned by createHostValidator is referenced when the CarAppService is bound to verify that the host is trusted; the bind fails if the host does not match the parameters defined by the validator. ALLOW_ALL_HOSTS_VALIDATOR is a HostValidator that accepts any host and should only be used for testing.
class PlacesCarAppService : CarAppService() { override fun createHostValidator(): HostValidator { return HostValidator.ALLOW_ALL_HOSTS_VALIDATOR } override fun onCreateSession(): Session { return PlacesSession() } } Add the PlacesSession class. class PlacesSession : Session() { override fun onCreateScreen(intent: Intent): Screen { return MainScreen(carContext) } } Template You need to choose from predefined templates and implement them according to the guidelines. The UI and UX of car apps are restricted because the UI needs to be optimal for drivers. Reference: Official template documentation In addition, add the necessary permissions to access templates for displaying maps. <uses-permission android:name="androidx.car.app.MAP_TEMPLATES" /> List Location Information Once launched, locations will be listed. The UI can be implemented using Composable. Add a MainScreen that inherits Screen from CarAppLibrary. To display the list of locations and a map, return a PlaceListMapTemplate using onGetTemplate . Templates are implemented using the Builder Design Pattern. Pass the items to be listed using setItemList and build the template to return. Use ItemListBuilder to build the items to be listed. class MainScreen( carContext: CarContext, ) : Screen(carContext) { override fun onGetTemplate(): Template { val placesRepository = PlacesRepository() val itemListBuilder = ItemList.Builder() .setNoItemsMessage("No data") placesRepository.getPlaces() .forEach { itemListBuilder.addItem( Row.Builder() .setTitle(it.name) // Each item in the list must have a DistanceSpan added to the title or text line. 
.addText( SpannableString(" ").apply { setSpan( DistanceSpan.create( Distance.create(Math.random() * 100, Distance.UNIT_KILOMETERS), ), 0, 1, Spannable.SPAN_INCLUSIVE_INCLUSIVE, ) }, ) .setOnClickListener { screenManager.push(DetailScreen(carContext = carContext, placeId = it.id)) } .setMetadata( Metadata.Builder() .setPlace( Place.Builder(CarLocation.create(it.latitude, it.longitude)) .setMarker(PlaceMarker.Builder().build()) .build(), ) .build(), ).build(), ) } return PlaceListMapTemplate.Builder() .setTitle("Places") .setItemList(itemListBuilder.build()) .build() } } View detailed location information Use PaneTemplate to implement the detail screen. class DetailScreen(carContext: CarContext, private val placeId: Int) : Screen(carContext) { private var isFavorite = false override fun onGetTemplate(): Template { val place = PlacesRepository().getPlace(placeId) ?: return MessageTemplate.Builder("Place not found") .setHeaderAction(Action.BACK) .build() val navigateAction = Action.Builder() .setTitle("Navigate") .setIcon( CarIcon.Builder( IconCompat.createWithResource( carContext, R.drawable.baseline_navigation_24 ) ).build() ) .setOnClickListener { carContext.startCarApp(place.toIntent(CarContext.ACTION_NAVIGATE)) } .build() val actionStrip = ActionStrip.Builder() .addAction( Action.Builder() .setIcon( CarIcon.Builder( IconCompat.createWithResource( carContext, R.drawable.baseline_favorite_24 ) ).setTint( if (isFavorite) CarColor.RED else CarColor.createCustom( Color.LTGRAY, Color.DKGRAY ) ).build() ) .setOnClickListener { isFavorite = !isFavorite // To capture updates to the screen state, call invalidate() to call `onGetTemplate` again. 
invalidate() }.build() ) .build() return PaneTemplate.Builder( Pane.Builder() .addAction(navigateAction) .addRow( Row.Builder() .setTitle("Coordinates") .addText("${place.latitude}, ${place.longitude}") .build() ).addRow( Row.Builder() .setTitle("Description") .addText(place.description) .build() ).build() ) .setTitle(place.name) .setHeaderAction(Action.BACK) .setActionStrip(actionStrip) .build() } } Launch the app Possible Errors When Trying to Launch Other Apps Caused by: androidx.car.app.HostException: Remote startCarApp call failed An error may occur when attempting to start navigation (where startCarApp is called). This is likely because a navigation app is not installed. You can easily find a navigation app by searching in the Play Store on the emulator. Vehicle Properties Available in the App Although not yet verified, the following properties should be available. The setting values may be adjustable in the emulator. Reference Vehicle Speed The current speed of the vehicle can be obtained. It is typically provided in km/h and is used for actions based on speed limits and driver assistance features. Fuel Level For gasoline vehicles, you can obtain the remaining fuel level in the tank. This can be used for features like "low fuel" warnings or suggestions for the nearest fuel station. Battery Level For EVs and Hybrids, you can monitor the state of the vehicle's battery. It is used to display charging status or remaining battery levels. Door Status The open/closed status of each door (front, rear, trunk, and hood) can be obtained. You can set up notifications when a door is left open or alerts to prevent forgetting to close it. Light Status The on/off status of the vehicle lights (headlights, high beams, fog lights, etc.) can be obtained. This allows for night mode switching and providing feedback to the driver. Engine Status The on/off/idling status of the engine can be obtained. The application can restrict certain actions when the engine is off. 
Parking Brake Status The status of whether the parking brake is applied or released can be obtained. This can be used to control app functionality and interactions while parked. Gear Position The position of the shift lever (Park, Reverse, Neutral, Drive, etc.) can be obtained. This allows for automatic activation of the back camera and interface changes based on the gear selection. Tire Pressure The tire information such as tire pressure can be obtained. This allows notification of low pressure warnings and maintenance alerts. External Temperature The external temperature can be obtained, allowing it to be used for weather-based interfaces or driver notifications based on driving conditions. Seat Occupancy Status The presence of passengers in each seat and seatbelt usage can be obtained. This is used to display warnings when seat belts are not fastened for safety reasons. Window Status The open/closed state of each window can be obtained. For example, a notification can be sent if a window is left open when the vehicle is turned off. HVAC Status The settings and status of the vehicle's air-conditioning system (heating, cooling, fan speed, and airflow direction) can be obtained. This allows the app to manage a comfortable in-car environment. GPS Location The vehicle's current GPS location can be obtained. This enables navigation apps and location-based services. Wiper Status The operational state of the wipers can be obtained. This helps adjust the UI based on weather and visibility conditions. Conclusion Thank you for reading to the end. Android Automotive OS The quality of the Android open source is well-maintained, making it easy enough for a beginner to clone, build, and boot it. However, the required PC specs are quite high. One of our engineers pointed out that boards with Android Automotive OS pre-installed are available worldwide, and I couldn't help but think, "Why didn’t you tell me sooner?" 
Nonetheless, getting the OS up and running was a highly rewarding experience. Auto and Automotive App Development This article ended up providing a broad overview of in-vehicle app development, but we discovered that the implementation process involves just a few steps. That said, the concepts of host app and emulator setup can be somewhat challenging to grasp. Since automotive app development doesn't allow for much UI customization, the real challenge and fun lie in refining what the app can do. In the future, as autonomous driving becomes mainstream, more categories may emerge, allowing drivers to enjoy gaming and other experiences. Bonus Reflecting on my childhood memories of driving, I was inspired by Hyuga-san's article to create AI-generated music. Here's what I came up with. It turned out pretty good and atmospheric. I suppose only my colleagues would stick around this long. Looking forward to hearing your thoughts! https://soundcloud.com/numami-775711983/5qclozsqk1mz
Continuous Delivery of Kubernetes Applications Using Only GitHub Actions Hello. My name is Narazaki, and I work in the Toyota Woven City Payment Solution Development Group. Our team is responsible for developing the payment infrastructure application for Woven by Toyota at Toyota Woven City . We build cross-functional payment solutions, covering everything from the backend to the web front end and mobile applications. The payment backend runs on Kubernetes and is developed using various cloud-native tools. This time, while following GitOps (an approach where infrastructure configuration files are managed and modified using Git, which is key to building and maintaining stable Kubernetes applications), we aim to implement the continuous delivery (CD) process using only GitHub Actions, instead of the commonly used cloud-native CD tools. The CD process in this setup is limited to:

Applying changes to Kubernetes configuration files
Updating the container image

While there are more advanced CD strategies like Blue/Green and Canary deployments, this approach starts small. This setup is designed for teams that already have a DevOps workflow and want to continuously and efficiently deliver Kubernetes applications with minimal developers and no additional tools, using only GitHub Actions, which they already use daily. The setup assumes that both the application code and the Kubernetes configuration management files are maintained in the same repository. (Technically, it might be possible to run this across repositories depending on permission settings, but let's not get into that here.) For GitLab users, there's an excellent tool called Auto DevOps , so don't worry, this isn't a 'GitHub and GitHub Actions are the best!' kind of post.

Cloud-Native CI/CD Tools for Kubernetes What tools come to mind when you think of CI/CD for Kubernetes applications?

Argo CD
Flux CD
PipeCD
Tekton

And so on.
These tools are powerful and highly useful for leveraging Kubernetes to its full potential. They also allow for flexible and secure updates to Kubernetes configuration files and application images, enabling GitOps practices. On the other hand, they require tool-specific knowledge and expertise. For smaller teams without dedicated DevOps specialists, maintaining them continuously can be a challenge, wouldn't you agree? Running a CD tool itself requires Kubernetes, and the tool also needs Kubernetes configuration files to manage those same configuration files. In this article, we'll explore how to set up the pipeline shown in the figure below using only GitHub Actions. Kubernetes runs on a generic cluster, not tied to any specific cloud provider. This setup requires a container registry. The configuration management file uses Kustomize as an example, but the approach can be applied to other tools like Helm, Terraform, and more.

flowchart TD
    A[Change Code] -->|Run Build Pipeline| B[Build and Push Container Image]
    B -->|Trigger pipeline to update container images| C[Create pull request with updated container image references]
    C -->|Review pull request| D[Deploy new container image to Kubernetes]
    linkStyle default stroke-width:2px,color:blue,stroke-dasharray:0

Demo Consider a repository that includes folders for both Kubernetes configuration files and applications. The folder structure is as follows. (This section omits specific code, Dockerfile contents, and application source code.)
├── .github
│   ├── actions
│   │   └── image-tag-update
│   │       └── action.yaml
│   └── workflows
│       ├── build-go.yaml
│       ├── build-java.yaml
│       ├── build-node.yaml
│       └── kubectl.yaml
├── go-app
│   ├── src/
│   └── Dockerfile
├── java-app
│   ├── src/
│   └── Dockerfile
├── k8s
│   ├── service-go.yaml
│   ├── service-java.yaml
│   ├── service-node.yaml
│   └── kustomization.yaml
└── node-app
    ├── src/
    └── Dockerfile

Each application follows the structure below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: some-server
          image: go-placeholder # put the same string as in kustomization.yaml as a placeholder

All placeholders are centrally managed in kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: techblog
resources:
  - service-go.yaml
  - service-java.yaml
  - service-node.yaml
images:
  - name: go-placeholder
    newName: go-app
    newTag: v1.1.1
  - name: java-placeholder
    newName: java-app
    newTag: v2.7.9alpha
  - name: node-placeholder
    newName: node-app
    newTag: latest

First, to apply the Kubernetes configuration files, configure the following GitHub Actions workflow.

name: kubectl
on:
  pull_request:
    branches:
      - "**"
    paths:
      - "k8s/**" # the location of the Kubernetes manifest files
  push:
    branches:
      - main
    paths:
      - "k8s/**"
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-kubectl@v4
      - env:
          KUBECONFIG_CONTENTS: ${{ secrets.KUBECONFIG_CONTENTS }} # put the kubeconfig in GitHub secrets beforehand
        run: |
          echo "${KUBECONFIG_CONTENTS}" > $HOME/.kube/config
          chmod 600 $HOME/.kube/config
      - run: kubectl apply --dry-run=server -k ./k8s >> $GITHUB_STEP_SUMMARY
      - if: github.ref == 'refs/heads/main' # changes are actually applied only on the main branch
        run: kubectl apply -k k8s/

This pipeline applies a standard Kubernetes configuration when using a kubeconfig with administrator privileges.
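The heart of the automation in the next section is a single in-place rewrite of one newTag field in kustomization.yaml. To make that transformation concrete before wiring it into a workflow, here is a minimal sketch that performs the same rewrite with awk as a stand-in for yq (the file layout matches the kustomization.yaml above; the tag values are examples):

```shell
# Create a scratch kustomization.yaml with the placeholder entries shown above
mkdir -p k8s
cat > k8s/kustomization.yaml <<'EOF'
images:
- name: go-placeholder
  newName: go-app
  newTag: v1.1.1
- name: java-placeholder
  newName: java-app
  newTag: v2.7.9alpha
EOF

# Rewrite newTag only inside the go-placeholder entry, leaving other entries alone
awk -v app="go" -v tag="v1.1.2" '
  BEGIN { target = app "-placeholder" }
  $1 == "-" && $2 == "name:" { current = $3 }        # track which image entry we are in
  $1 == "newTag:" && current == target { $0 = "  newTag: " tag }
  { print }
' k8s/kustomization.yaml > k8s/kustomization.yaml.new
mv k8s/kustomization.yaml.new k8s/kustomization.yaml

grep "newTag" k8s/kustomization.yaml
```

The real pipeline below does this with mikefarah/yq, which edits the YAML structurally rather than by line patterns; the awk version is only meant to show what the one-field update looks like.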
Adjust the method of retrieving the kubeconfig based on the cluster's configuration approach, such as for different cloud environments. Next, set up a composite action that automatically creates a pull request to update the container's image tag when an application image is pushed.

name: image-tag-update
description: 'Task to update image tags in kustomization when container images are updated'
inputs:
  target_app:
    description: 'Target application'
    required: true
  tag_value:
    description: 'New container image tag'
    required: true
  token:
    description: 'Token with PR and content update privileges'
    required: true
runs:
  using: 'composite'
  steps:
    - uses: actions/checkout@v4
      id: check-branch-exists
      continue-on-error: true
      with:
        ref: "image-tag-update" # default branch name for tag updates
    - uses: actions/checkout@v4 # checkout cannot fall back on its own, so check out main when the branch above is missing
      if: steps.check-branch-exists.outcome == 'failure'
      with:
        ref: main
    - uses: mikefarah/yq@master # replace the value of the target placeholder's tag with yq
      with:
        cmd: yq eval '(.images[] | select(.name == "'"${{ inputs.target_app }}-placeholder"'")).newTag = "'"${{ inputs.tag_value }}"'"' -i k8s/kustomization.yaml
    - uses: peter-evans/create-pull-request@v6
      if: steps.check-branch-exists.outcome == 'failure' # create a new pull request if none exists
      with:
        token: ${{ inputs.token }}
        title: 'Update container image'
        body: |
          Update `${{ inputs.target_app }}`
        branch: "image-tag-update"
    - uses: stefanzweifel/git-auto-commit-action@v5
      if: steps.check-branch-exists.outcome == 'success' # add a commit to the existing branch if the checkout succeeded
      with:
        commit_message: "Image update for ${{ inputs.target_app }}"

The composite runs during image creation for each application. For multiple applications, it's a good idea to invoke it after each image is created. ...
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v6
  with:
    file: ./Dockerfile
    push: true
    tags: ${{ env.tag }} # some tag
- uses: ./.github/actions/image-tag-update
  if: github.ref == 'refs/heads/main'
  with:
    target_app: go
    tag_value: ${{ env.tag }}
    token: ${{ secrets.GITHUB_TOKEN }} # a GitHub token with content and pull request editing privileges

When the application is pushed, the container image reference updates automatically, letting you deploy a new one with a pull request! (Tag derivation is handled by your workflow; the example below shows a patch version increment.)

  - name: go-placeholder
    newName: go-app
-   newTag: v1.1.1
+   newTag: v1.1.2

Operational considerations

Timing of deployment: Image update pull requests deploy immediately upon merging. If you want to release updates along with infrastructure changes, you can either add the fixes to the same branch or merge them when the timing is right.

Adding a new container application: For example, if you add a Python application in the above setup while an image update pull request is still open, updating the Python image tag won't take effect unless the pull request includes the latest changes.

Rollback: It's easy to undo: just revert the commit.

Timing of reconcile: While many GitOps tools offer near real-time reconciliation to minimize drift, this method only reconciles when the CD pipeline is running. It's important to choose the right tool based on the number of teammates and their permissions to update the Kubernetes cluster.

You're interacting with the container registry indirectly: While some tools retrieve the latest container image directly from the container registry, this approach operates differently. It's advisable to include a verification step for each container registry to ensure the container exists.

About permission settings for GitHub Actions: You'll need update permissions for contents and pull-requests . Set permissions in Actions settings, GitHub Apps, and more. Learn more here .
Overwritten by a container image that was executed later The CD tool determines the newer version by checking the container image tag, following conventions like Semantic Versioning. The workflow above will overwrite the image tag in the later-executed pipeline, regardless of the tag’s value. If this behavior is an issue, consider checking the value before deciding whether to overwrite it. Summary With this approach, GitOps can be fully managed on GitHub, enabling a simple and efficient continuous delivery process for Kubernetes applications. Since CD tool errors can also be consolidated in GitHub Actions, it’s convenient to check execution results and errors just like in the usual CI process. Kubernetes offers a wide range of tools, making selection challenging. However, by choosing the right tools for my workflow, I aim to improve the productivity of Kubernetes application development.
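As a postscript to the overwrite caveat above: one way to "check the value before deciding whether to overwrite" is to compare the current and incoming tags with sort -V, which orders version strings. This is a sketch assuming plain vX.Y.Z tags with no pre-release suffixes:

```shell
current="v1.1.5"    # tag already in kustomization.yaml
incoming="v1.1.2"   # tag produced by a later-running pipeline

# sort -V orders version strings; only overwrite when the incoming tag is the newer one
newest=$(printf '%s\n%s\n' "$current" "$incoming" | sort -V | tail -n1)
if [ "$newest" = "$incoming" ] && [ "$incoming" != "$current" ]; then
  echo "overwrite with $incoming"
else
  echo "keep $current"   # this branch runs here: v1.1.2 is older than v1.1.5
fi
```

A step like this, placed before the yq rewrite, turns the "last pipeline wins" behavior into "newest version wins"; note that sort -V handles pre-release suffixes differently from strict Semantic Versioning, so tags like v1.1.2-rc1 need extra care.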
I'm Xu Huang from the KINTO ID Platform team. For several years we have provided a user authentication and authorization system (UserPool for short) to multiple countries, adopting a Broker model to connect the UserPools of multiple regions so that they can share authentication and authorization information with each other. Last year, as part of a cost-reduction initiative, we reviewed the architecture design and carried out a migration, so I'd like to introduce the changes we made.

Initially, following our global expansion strategy, we adopted AWS Aurora Global Database (Global DB for short) and, to reduce access load and latency, placed the replica (slave) DBs close to each UserPool, with the Broker servers deployed in the same region as their replica DB. (Due to Global DB constraints, only one master DB is allowed, and up to five replica DBs.) We rolled out the UserPool service in turn to the regions outlined in blue in the figure above, and to manage users centrally, the required unique IDs were issued from a single aggregated point and synchronized to the sub-DBs in each region.

Phase 1: Switch from Global DB to a normal DB and retire the write-only application. We had placed servers in multiple regions in a design that prioritized access load, but in actual operation the load never reached a level requiring scale-up, so we were paying unnecessary cost. To arrive at an appropriate configuration, we ran a verification and evaluation and judged that Global DB was not needed, so we changed the design so that the Brokers can read and write directly to a shared DB. The architecture after the change is shown in the figure below.

Phase 2: Consolidating to a single Broker. The Phase 1 work cut costs considerably, but we kept looking for further reductions and began examining whether the Brokers could be consolidated into one. There was one challenge, however: as an identity provider, we expose redirect URLs to external third parties, and changing them would force those third parties to make corresponding changes as well, so we looked for a way to migrate without changing the domains. With the infrastructure team's help, we concluded that by changing the DNS settings in Route 53 to point to a CloudFront distribution connected to the new consolidated server, we could keep the existing domains.

With the design changed as in the figure above, the physical distance between servers increases, so we also measured how much latency the communication from the UserPools to the consolidated Broker would add. The result: UserPool-to-Broker communication became about 10% slower, but because the Broker is now in the same region as the DB, that path became faster, and overall performance before and after the architecture change was roughly unchanged. So we drew up the Phase 2 migration plan and carried it out.

Results: Through the two phases above, we optimized the configuration to match the actual state of the business. Going forward, we will continue reviewing the functional side as well and carry out regular cost-reduction activities.
This article is part of day 6 of KINTO Technologies Advent Calendar 2024 . 🎅🎄 Introduction Merry Christmas 🔔. I am Romie ( @Romie_ktc ) from Osaka Tech Lab, where I work on Android-side development for the my route app in the Mobile App Development Group. In the my route Android team, we are currently switching the UI implementation from XML over to Jetpack Compose (hereinafter, Compose). However, since we cannot refactor everything all in one go, there will inevitably be situations where parts converted to Compose will be layered on top of XML. In this article, I'll explain how we implemented this by overlaying Compose on top of an XML BottomSheet. What the finished result looks like Note: This article is about implementing things while in the process of refactoring. Implementing everything entirely in either XML or Compose is recommended. Basics The implementation is done with the following class, which inherits from BottomSheetDialogFragment. class MixComposedBottomSheetDialog : BottomSheetDialogFragment() Set the Basic Behavior of the BottomSheet Here, we set the behavior of the BottomSheet. The following code is put in onCreateView. dialog?.setOnShowListener { dialogInterface -> val bottomSheetDialog = dialogInterface as BottomSheetDialog val bottomSheet = bottomSheetDialog.findViewById<View>(com.google.android.material.R.id.design_bottom_sheet) // If you want to set rounded corners and a background color, follow the steps below context?.let { bottomSheet?.background = ContextCompat.getDrawable(it, R.drawable.background_map_bottom_sheet) } val bottomSheetBehavior = bottomSheet?.let { BottomSheetBehavior.from(it) } bottomSheetBehavior?.let { behavior -> // Set maxHeight and peekHeight to whatever heights you want. 
behavior.maxHeight = EXPANDED_HEIGHT // Set the height for when the BottomSheet is expanded as far as it will go behavior.peekHeight = COLLAPSED_HEIGHT // Set the height for when the BottomSheet is displayed in its initial state behavior.isHideable = false behavior.isDraggable = true } } Compose By returning ComposeView via onCreateView, you can put Compose on top of BottomSheetDialogFragment. return ComposeView(requireContext()).apply { setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed) setContent { BottomSheetContents() } } If this is all you want to do, then it is easy enough to understand. However, if you want to add a button that will always be at the bottom of the bottom sheet, things get trickier. Developing Things Further The button itself is implemented with Compose. However, adding a button in this way means it will not be displayed unless you scroll through the contents. return ComposeView(requireContext()).apply { setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed) setContent { BottomSheetContents() ButtonOnBottomSheet() } } To ensure that the button will always be anchored to the bottom of the bottom sheet and will not get pulled about even by scrolling through the contents, you need to implement something like the following: val button = ComposeView(context ?: return@setOnShowListener).apply { setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed) setContent { ButtonOnBottomSheet() } } To anchor the button to the bottom of the bottom sheet, use the following code. Using this code enables you to directly retrieve the layout implemented with BottomSheetDialogFragment. Consequently, it enables you to manipulate views more flexibly. 
val containerLayout = dialogInterface.findViewById<FrameLayout>(com.google.android.material.R.id.container) val coordinatorLayout = dialogInterface.findViewById<CoordinatorLayout>(com.google.android.material.R.id.coordinator) clipChildren is a property of ViewGroup that specifies whether to clip the drawing of a child view if it will be drawn outside the boundary of the parent view. It will be used when something overlaps other elements of the bottom sheet. // The default value is true, and setting it to false lets you display child views as is even if they go outside the boundary of the parent. button.clipChildren = false button.layoutParams = FrameLayout.LayoutParams( FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.WRAP_CONTENT, ).apply { gravity = Gravity.BOTTOM } containerLayout?.addView(button) button.post { val layoutParams = coordinatorLayout?.layoutParams as? ViewGroup.MarginLayoutParams layoutParams?.apply { button.measure( View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED), View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED), ) this.bottomMargin = button.measuredHeight containerLayout?.requestLayout() } } Summary Summarizing the implementation so far, we have the following: override fun onCreateView(): View { dialog?.setOnShowListener { dialogInterface -> val bottomSheetDialog = dialogInterface as BottomSheetDialog val containerLayout = dialogInterface.findViewById<FrameLayout>(com.google.android.material.R.id.container) val coordinatorLayout = dialogInterface.findViewById<CoordinatorLayout>(com.google.android.material.R.id.coordinator) val bottomSheet = bottomSheetDialog.findViewById<View>(com.google.android.material.R.id.design_bottom_sheet) context?.let { bottomSheet?.background = ContextCompat.getDrawable(it, R.drawable.background_map_bottom_sheet) } val button = ComposeView(context ?: return@setOnShowListener).apply { setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed) 
setContent { ButtonOnBottomSheet() } } button.clipChildren = false button.layoutParams = FrameLayout.LayoutParams( FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.WRAP_CONTENT, ).apply { gravity = Gravity.BOTTOM } containerLayout?.addView(button) button.post { val layoutParams = coordinatorLayout?.layoutParams as? ViewGroup.MarginLayoutParams layoutParams?.apply { button.measure( View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED), View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED), ) this.bottomMargin = button.measuredHeight containerLayout?.requestLayout() val bottomSheetBehavior = bottomSheet?.let { BottomSheetBehavior.from(it) } bottomSheetBehavior?.let { behavior -> // Set maxHeight and peekHeight to whatever heights you want. behavior.maxHeight = EXPANDED_HEIGHT // Set the height for when the BottomSheet is expanded as far as it will go behavior.peekHeight = COLLAPSED_HEIGHT // Set the height for when the bottom sheet is displayed in its initial state behavior.isHideable = false behavior.isDraggable = true } } } } return ComposeView(requireContext()).apply { setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed) setContent { BottomSheetContents() } } } In other words, here's how we implemented it: Create a BottomSheet in xml. Adjust the layout (i.e., the heights and so on). In order to attach the button to the bottom of the BottomSheet, prepare some more layout details and combine everything together. It means that we overlaid the Compose content onto the BottomSheet. Thank you for reading all the way to the end!
Hello! I'm Wada ( @cognac_n ), a generative AI evangelist on the Generative AI Utilization Project at KINTO Technologies (hereinafter, KTC). Trying out DeepSeek R1 with Azure AI Foundry 1. Introduction Large language models (LLMs) have been evolving remarkably fast lately, with many companies releasing their own. One model drawing particular attention is DeepSeek R1, developed by the Chinese AI company DeepSeek. On January 30, 2025, Microsoft made the DeepSeek R1 model available on Azure AI Foundry and GitHub. In this article, I'll walk through how I actually tried DeepSeek R1 on Azure AI Foundry and share my impressions. 2. What is Azure AI Foundry? Azure AI Foundry is a comprehensive AI development platform provided by Microsoft. It offers, in one place, the tools and services developers need to build, evaluate, and deploy generative AI solutions and custom copilots. Key features: A wide range of AI models: it supports a broad lineup of state-of-the-art and open-source models from Microsoft, OpenAI, Hugging Face, Meta, Mistral, and other partner companies. DeepSeek R1 is provided as part of this lineup. An integrated AI toolchain: SDKs, APIs, and a portal accelerate the entire development lifecycle, giving a consistent experience from data preprocessing through model inference to continuous monitoring. Responsible AI in practice: built-in evaluation features, safety filters, and security controls raise the reliability and transparency of AI. Enterprise-grade scalability: high availability and scalability are ensured on Azure's managed infrastructure, with support for enterprise-level monitoring and governance. The documentation states that when you use models through the model catalog in the Azure AI Foundry portal, your prompts and outputs are not shared with Microsoft or the model providers, so a variety of models can be used securely: "Microsoft provides and manages the hosting infrastructure and API endpoints. Models hosted in this 'models as a service' (MaaS) scenario are subject to Azure's data, privacy, and security commitments. Learn more about the Azure compliance certifications applicable to Azure AI Foundry. Microsoft acts as the data processor for prompts and outputs sent to and generated by models deployed for pay-as-you-go inference (MaaS). Microsoft does not share these prompts and outputs with the model providers, nor does Microsoft use them to train or improve Microsoft's models, the model providers' models, or any third party's models." 3. 
Setting up DeepSeek R1 on Azure AI Foundry From here, I will walk through the steps for actually using DeepSeek R1 on Azure AI Foundry. Note that the detailed screens and settings reflect the specification as of this writing (January 31, 2025). Since the UI and workflows change frequently, I will defer to the official documentation wherever possible. Prerequisites and account setup: Prepare an Azure account. Get access to Azure AI Foundry. Steps to set up DeepSeek R1: 1. Create a project: Create a new project in the Azure AI Foundry portal. The regions where DeepSeek-R1 is available are [eastus2, westus3, northcentralus, eastus, southcentralus, westus]. (As of January 31, 2025.) 2. Select the DeepSeek R1 model: Find DeepSeek R1 in the "Model catalog" and open its model page. 3. Set a deployment name and check the pricing: Set any deployment name you like, and turn the content filter on or off. As of January 31, 2025, DeepSeek-R1 is offered free of charge. Pricing is expected to change, so keep checking the latest information. "DeepSeek R1 use is currently priced at $0, and use is subject to rate limits which may change at any time. Pricing may change, and your continued use will be subject to the new price. The model is in preview; a new deployment may be required for continued use." 4. Deploy and build the endpoint: An endpoint is generated automatically within the Azure AI Foundry workspace, and a URI and API key for calling the model are issued. 4. A quick demo with DeepSeek R1 Here is a code example for actually trying DeepSeek R1 inference. Install the required library: pip install azure-ai-inference An example of code that calls the API is shown below. Azure AI Foundry also provides code samples, so trying it out is very easy. from azure.ai.inference import ChatCompletionsClient from azure.ai.inference.models import SystemMessage, UserMessage from azure.core.credentials import AzureKeyCredential client = ChatCompletionsClient( endpoint="<YOUR_TARGET_URI>", credential=AzureKeyCredential("<YOUR_KEY>"), ) deploy_name = "<YOUR_DEPLOY_NAME>" response = client.complete( messages=[ SystemMessage(content="You are a helpful assistant."), UserMessage( content="企業における生成AI利活用を推進する方法を考えてください。日本語で回答してください。"  # "Think of ways to promote the use of generative AI in a company. Answer in Japanese." ), ], max_tokens=2048, model=deploy_name, ) print(response.choices[0].message.content) Example of an actual response (translated from Japanese): <think> First, I need to understand what the problem is. I have to think about how a company can promote the use of generative AI. Generative AI covers many use cases, so the first step is to grasp the current state inside the company. We need to investigate which departments could make use of AI and what challenges they face. Marketing, customer support, and business automation come to mind, for example. Next, I think education and training are important. Unless employees are given opportunities to learn how to use AI, putting it to practical use will be difficult. In particular, workshops and training programs would help resolve anxiety about, and limited understanding of, AI. 
Also, selecting the right tools is necessary. It is important to choose generative AI platforms and tools that match the company's needs. Customization may be required in some cases, and security and data-privacy issues also have to be addressed. Support from management is indispensable too. Without executive backing, budget and resources cannot be secured. The benefits of AI adoption need to be communicated clearly to win management's understanding. Furthermore, building success stories is effective. If a pilot in one department produces results, adoption spreads more easily to other departments. Sharing the results and rolling the effort out across the company could be a good strategy. Establishing ethical guidelines and policies is also important. Clear rules are needed to prevent misuse of AI and encourage responsible use. Continuous evaluation and improvement must not be forgotten either. By measuring effectiveness after adoption and iterating based on feedback, sustained promotion becomes possible. I should organize these points and work out concrete steps. </think> To promote the use of generative AI in a company, it is effective to proceed systematically through the following steps. --- ### **1. Assess the current situation and set goals** - **Identify needs**: Survey the operational challenges of each department (planning, sales, development, support, etc.) and clarify the areas generative AI can address (document creation, data analysis, customer service, etc.). - **Prioritize**: Start with high-impact, highly feasible use cases (e.g., automated replies to inquiries, automated report writing). --- ### **2. Build the organization and change mindsets** - **Launch a promotion team**: Executives, the IT department, and representatives of each department work together to secure budget and resources. - **Run education programs**: - **Basic understanding**: Share how generative AI works, its benefits, and its ethical risks (information leaks, bias, etc.) through training. - **Hands-on training**: Teach prompt design and output verification in a hands-on format. --- ### **3. Tool selection and pilot operation** - **Choose AI suited to your company**: - Compare open-source options (Llama 2, etc.) and cloud services (ChatGPT Enterprise, Google Vertex AI, etc.). - Select based on security criteria (data encryption, whether trained models can be customized). - **PoC (proof of concept)**: Run a limited pilot and verify accuracy and efficiency gains. --- ### **4. Governance** - **Draft usage guidelines**: - Establish rules prohibiting the input of confidential information and a double-check process for outputs. - Address legal risks around copyright and privacy. - **Monitoring**: Regularly review the quality and ethics of AI outputs and keep improving. --- ### **5. Foster internal culture** - **Share success stories**: Publicize the results of departments that saw benefits through internal presentations and reports to raise motivation. - **Design incentives**: Recognize and reward teams that improve operational efficiency with AI. --- ### **6. External collaboration and keeping pace with change** - **Work with vendors and experts**: Gather information on the latest technology trends and receive customization support. - **Continuously update the AI**: Revisit how it is used as model accuracy improves and regulations change. --- ### **Key points** - **Executive commitment**: Presenting an ROI estimate and securing long-term support is essential. - **Improvement driven by the field**: Use feedback from the employees who actually use the tools to keep improving them and optimizing processes. Generative AI is a "tool," and the success of its use depends on "how people use it." Beyond introducing the technology, raising digital literacy across the whole organization and ensuring psychological safety are key. 5. 
Impressions from actually using it Response time: this answer took about four minutes. Both the model's own inference time and processing delays from concentrated usage likely contributed. Output quality: because it is a reasoning model that "thinks," it returns answers with deep insight even for abstract or highly complex prompts. When used via the API, the thinking process is output enclosed in <think> tags, which is convenient. Reading the contents of the tags, the thought process unfolds almost like a discussion among several people. Summary: With Azure AI Foundry, you can easily try the latest third-party models. In the generative AI space, where "trying things first" matters, this ease of use is very welcome. We Are Hiring! At KINTO Technologies, we are looking for colleagues to promote the use of generative AI in our business. We are happy to start with a casual chat. If you are at all interested, please contact us via the link below or via DM on X. We look forward to hearing from you!! Thank you for reading to the end!
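Because the API wraps the reasoning in <think> tags ahead of the final answer, the two parts are easy to separate programmatically. Here is a minimal sketch; the regex-based helper and the sample string are my own illustration, not part of the azure-ai-inference SDK:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a DeepSeek R1 response into (reasoning, final_answer),
    where the reasoning is the part wrapped in <think>...</think>."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        # No think block: treat the whole response as the answer.
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

sample = "<think>First, understand the problem.</think>Here is a plan."
reasoning, answer = split_reasoning(sample)
print(reasoning)  # First, understand the problem.
print(answer)     # Here is a plan.
```

In an application you would typically log or hide the reasoning part and show only the final answer to users.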
Introduction Hello, I’m Koji Nishiguchi, Manager of the Analysis Group in the KTC Data Analysis Division. Exploring how future-focused research and technology development can address real-world challenges is a key driver of innovation. However, a disconnect often arises between researchers and businesses regarding "research aimed at future needs" and "technologies ready for immediate on-site application." This gap makes it difficult for both parties to collaborate effectively. In this article, we will highlight our efforts to bridge this "matching gap" and share our vision for future initiatives. (Generated by Microsoft Copilot) Activities so far Toyota Motor Corporation operates a division called the Future Creation Center (FRC), dedicated to conducting future-focused research. Through a fortunate connection, members of the FRC joined us at the Data Analysis Division of KINTO Technologies (KTC) as collaborators, assisting with our research challenges. This marked the beginning of our initiative. Initially, we addressed specific data science challenges and were able to advance the project smoothly as a novel research endeavor. In traditional, textbook data science problems, evaluation focuses primarily on the accuracy of the predictions themselves. From a business perspective, however, the reliability and variability of predictions also play a crucial role; in other words, it is crucial to understand how confident each prediction is. Over time, several challenges emerged, prompting us to expand our efforts further. At the start, these challenges were relatively manageable, and we were able to turn them into research projects that made meaningful contributions to business outcomes, with the valuable support of FRC. The challenges of matching However, a problem arose. It became clear that the constraints and priorities of the projects were different for each side. 
While both teams aimed to improve user experience, their goals were not the same: KTC focused on product commercialization, while FRC aimed to conduct research for the future, such as securing patents or publishing papers. These differences created challenges, such as mismatched timelines and varying expectations for the level of completion. To work within these constraints, we often had to focus on “technologies that were nice to have but not essential.” As a result, the developed technologies were deprioritized in practice and remained unused. This experience highlighted the challenges of establishing effective collaboration between the "research team" and the "business field." (Generated by Microsoft Copilot) Conducting the ideathon 1. Expectations for the ideathon As one of the solutions, we organized an ideathon. An ideathon is a workshop-style event designed to generate and develop new ideas for tackling challenges in a short period of time. This event brought together participants from both FRC and KTC, aiming to explore ways to leverage each other's technologies and research through open, creative discussions. In the medium and long term, we hoped this collaboration would help both parties identify opportunities for cooperation. For KTC, this meant gaining fresh perspectives on how to apply new technologies, while FRC could discover potential "seeds" for future research projects. Purpose of the initiative and expectations for the ideathon: Research side: the purpose is research needed for the future; the expectation is discovery of seeds for future projects. Business side: the purpose is technologies that can be used immediately; the expectation is gaining new perspectives. 2. Implementation process To prepare for the ideathon, FRC created a list of technologies that could be introduced. KTC members then provided feedback through a survey, and based on the results, two technologies were selected for the event. 
On the day of the ideathon, we began with an introduction of the two technologies, followed by a short Q&A session, and then moved into the main ideathon activity. 3. Event summary Date: September 2024 Schedule: 16:00-17:30 Study session (Introduction of two technologies) Technology A: Recommendation-related technology Technology B: Customer psychology measurement technology 17:40-19:00 Ideathon (25 minutes per table) Participants: ・FRC: 7 members (1 online) ・KTC: 11 members (3 online) During the ideathon, KTC participants were divided into three groups, rotating through three discussion tables: Technology A, Technology B, and a free discussion table. Each table consisted of 2-3 FRC members and 3-4 KTC members. Most participants were meeting for the first time, so it began with self-introductions and overviews of each participant’s work. This was followed by discussions focused on the table’s theme. The 25 minute sessions went by in no time, and in many cases, discussions were just starting to gain momentum when time ran out. Even so, a project with potential for matching emerged, and FRC researchers and KTC engineers quickly began aligning on the details. While we cannot share specifics, we felt that if KTC's pursuit of "customer understanding" and the research department’s technologies were well-aligned, meaningful progress could be made. 4. Feedback from participants and suggestions for improvement After the event, we conducted a survey among KTC participants. The satisfaction score was 4.11 out of 5. With the condition of "If the content is interesting," all participants said they would like to join the next event. This shows that the event was meaningful for everyone. However, the following suggestions for improvement were made: The explanations during the study session were too long. The audio quality in the online presentations was poor. The study session should include examples using actual KTC data. More concrete examples are needed. 
The ideathon sessions should be longer. The self-introductions and work overviews at the beginning of each ideathon session took too much time, leaving less time for discussion. We will take these suggestions into account to make future events smoother and more effective. Future prospects In addition to the FRC, the Toyota Group has many other research units. When opportunities arise, we aim to exchange ideas with other research teams and create spaces for effective matching. Specifically, we plan to clearly present the technologies and solutions required by the business field, while offering researchers opportunities to apply their work in real-world business contexts. At the same time, by communicating researchers’ interests, such as themes and challenges, to companies, we believe we can bridge the gap between research and business, fostering mutual understanding. Through these efforts, we aim to build a win-win relationship by increasing the number of cases where technologies are effectively applied in practical settings. To achieve this, we must continue to accumulate experience through ideathons and matching events, making collaboration between technology and research smoother. We believe this will speed up the process of bringing future-focused research into practical use and help create a better society. Conclusion To connect future-focused research with technologies needed today, it is important to create spaces where the needs and expectations of both sides can be understood. Through study sessions, ideathons, and workshops like those mentioned, we believe that fostering mutual understanding between researchers and the business side can pave the way for more effective and practical collaboration. 
By fostering such collaboration, we believe that integrating various new technologies into KTC’s vision of an "in-house development organization with a customer-centric perspective" , will enable us to deliver inspiring new experiences to our customers at the earliest. KTC will continue to work with FRC to develop new technologies and explore the practical applications. This is essential for us, as a leading mobility platform provider, to bring not just “mobility” but also “inspiration” to every journey. (Generated by Microsoft Copilot) <Unauthorized reproduction or use of the content, text, images, etc., on this site is strictly prohibited.>
Introduction Hello, I am Osanai, the leader of the SRE Team in the Platform Group at KINTO Technologies (hereinafter, KTC). In this article, I will discuss how we developed the SRE Team’s mission and vision. If you’re only interested in the finalized mission, feel free to skip directly to here . Why Did We Decide on a Mission and Vision? Three key factors influenced our decision to establish a mission and vision this time. 1. A suggestion from a team member The SRE Team was established in January 2021, but after various ups and downs, it became a one-person team by the end of March. In April, a new team member suggested creating a mission and vision for the team, which sparked the initiative. He mentioned that, in his previous job, the company’s mission and vision were well-integrated into daily operations and functioned effectively. However, at the time, I didn’t see an immediate need for it and largely dismissed the idea as something we could address later if time allowed. (Apologies!) 2. The need to present a team roadmap to senior leadership. At the time, the SRE team consisted of just two members with limited resources, so we were actively working on initiatives to enhance recruitment efforts. As part of this effort, it became essential to clearly communicate to senior leadership what the SRE Team aims to accomplish, the challenges we anticipate, and the number and types of personnel required to address them effectively. As part of this process, we decided to create a roadmap for the SRE Team. Additionally, we concluded that establishing a mission and vision would provide a higher-level framework to guide the team’s activities. 3. Diversification of the term “SRE” Based on that, we started looking at the missions and visions of various companies’ SRE teams. Then, we came across a slide like this: From “What Does SRE Aim to Do?” Personally, this struck a chord with me the most. 
At KTC, there are several other cross-functional teams and groups in addition to the SRE Team, such as Platform Engineering, Cloud Infrastructure, DBRE, CCoE, and Security. The term "SRE" encompasses a broad range of responsibilities, so I felt it was important to define our specific role and focus within the context of the existing environment around us. How Did We Decide? So, in light of all the above, we decided to come up with a mission and vision. However, having no guidelines on how to go about it, we were fumbling in the dark to begin with. First, regarding the time commitment and timeline for the decision-making process, we decided to complete it within a relatively short period. This was partly because we anticipated that the process might drag on if we chose to make decisions incrementally during weekly meetings. We considered dedicating a full day to the process but realized that our ideas might be overly influenced by the conditions on that particular day. Ultimately, we decided to allocate one hour per day over five business days, with a weekend in between, to allow for a more balanced and thoughtful approach. After going through the process, my personal impression is that spreading it out over several days was a good decision. It gave me time to reflect, and I came up with ideas while taking a bath and just before falling asleep. Next, in terms of how we decided, we opted to base our approach on Google re:Work . We proceeded by following “Set and communicate a team vision” in the “Managers” theme. This time, our goal was to formulate a mission and vision. To achieve this, we proceeded by defining the core values, purpose, and mission within the relevant sections. For the vision, we approached it from the perspective of imagining what we would want the SRE Team and the company as a whole to look like once the mission had been accomplished. Therefore, we chose to define the vision based on the mission we had established. 
Day 1: Identifying What Values We Want to Cherish On the first day, we decided to start by identifying the values that each team member personally wants to cherish. Using Miro as our collaboration tool, each of us wrote on sticky notes the things we want to cherish, encompassing both technical and non-technical aspects. Coming up with ideas about the things you want to cherish can be quite challenging, so I realized that an alternative approach could be to identify ideas by considering the outcomes you would most want to avoid. After this, we had an open discussion about the values embodied by people we admire. Day 2: Doing a Deep Dive into the Core Values of the Team On the second day, we each shared and discussed the values we had identified on the first day. Even when the details differed, we discovered that by gradually abstracting our thoughts through questions like “Why do you want to cherish that value?”—similar to a why-why analysis—we arrived at shared underlying values. Taking note of these insights as you go could be helpful when defining your own mission. Next, we discussed the values we resonated with and brainstormed concrete behaviors that reflect those values. Since we were doing it in a pair this time, we picked out some values that we could relate to from among each other’s, and did a deep dive into those. For example, one of the values was along the lines of “Produce better output through good collaboration,” but that sounded a little abstract. So, we reworded it more concretely by asking what “good collaboration” and “better output” actually meant, and as we did so, we got a much clearer image of what it was about. Day 3: Exploring the Reason Why the Team Exists On the third day, we thought about the purpose (the reason why the team exists). We collaboratively answered six questions under the theme “Why Does This Team Exist?” and discussed our ideas along the way. 
One thing I felt requires some caution is that these questions focus on the current status quo, so people's answers about the fundamental reason for the team's existence might be biased especially if they are motivated to bring about changes to the current organizational situation. By reflecting on what we had done so far, abstracting those actions, and revisiting the reasons behind them, we started to uncover a potential candidate for the team’s fundamental reason for existence. Day 4: Deciding on a Mission On the fourth day, we finally decided on our mission. First, we each wrote on sticky notes our own thoughts about three questions, as a form of self-reflection . Then, we decided on a mission in light of everything we had done since the first day. To be honest, there were moments when we relied on inspiration, but we reviewed all the activities and conversations we had engaged in up to that point. From those, we identified phrases that seemed to stand out as potential keywords and crafted a mission statement that encapsulated and satisfied them. In making our decision, we also kept in mind whether our ideas aligned with the five key characteristics that a mission should have. Day 5: Deciding on a Vision On the fifth day, we decided on a vision by imagining what we wanted the SRE Team and the company as a whole to be like once we had achieved the mission. The Miro board we actually created (just the atmosphere of it) The Mission and Vision We Decided On The mission and vision we actually decided on are as follows: To explain the mission, first, I will look at the part about being able to provide products as fast as possible. KTC has a wide variety of products. We would like to enable them to provide their functions to users as quickly as possible, and create an environment where we can get feedback. However, just providing products quickly is not enough: we also need to deliver to users products that will be of value to them. 
In addition, no matter how quickly you can provide a valuable product, it will be meaningless if the users cannot use it to their satisfaction. So, we added the word “reliable.” For our vision, we envisioned what KTC would look like once it achieves the ability to provide reliable, highly valuable products as quickly as possible. We concluded that to simultaneously fulfill the quality-focused aspect of "reliable, highly valuable products" and the speed emphasized in "provide [them] as fast as possible," it is essential to strike the right balance between development and operations, guided by service level requirements. In Conclusion We managed to formulate a mission and vision for our team without a hitch. Although it's still early days since we established it, conversations are already emerging around questions like, "Considering our mission, is this something we should really pursue?" and, "If we decide to do this, to what extent should we take it?" This gives me the sense that the mission is beginning to serve effectively as a guiding framework for our team. That said, simply formulating them now doesn’t mean this is the end of the journey. I would like us to develop a roadmap for achieving our mission and vision and work together toward them as a unified team. Also, the SRE Team is looking for people to work with us. If this has sparked even a slight interest, please don’t hesitate to reach out to us. We look forward to hearing from you! https://hrmos.co/pages/kinto-technologies/jobs/1811937538128224258
Introduction Hello! I'm Viacheslav Vorona, an iOS engineer. This August I attended iOSDC Japan 2024 , a conference for iOS developers held in Tokyo. It was my first time at a conference where almost all the sessions were in Japanese, so I was a little nervous, but once there I could follow along more smoothly than I had expected. When looking at code snippets or listening to talks on topics I already understood to some extent, I found I could keep up surprisingly well even without catching every single word. My thanks to everyone who put English translations on their slides: it helped more than I can say ❤️ I attended quite a few sessions, including some in which the speakers presented their hobby projects. For example, in ta.inoue -san's session , he explained how GPS works while demonstrating, with an iOS device, how to sniff the transmitted data, which was fascinating. It always captivates me that GPS is so simple in principle yet so astonishingly complex to actually build, and this session did not disappoint. Another memorable one was haseken -san's deep dive into the history and roles of UIKit's various ViewControllers. We use ViewControllers all the time in everyday work, but UIKit is very broad, and quite a few of its special-purpose ViewControllers are little known (several were new to me as well). This session was also very interesting. Today I would like to share a few points from these sessions that could prove useful in practice and that I want to keep in mind going forward. Let's get started! Practical uses for hidden APIs Every developer is occasionally forced to take a close look at the third-party code used in a project. The reasons vary: to check whether a library or framework fits a new project's needs, to understand an existing tool more deeply, or sometimes to track down and report a bug in third-party code. Yet the most mysterious code we use every day is the iOS frameworks themselves (SwiftUI, UIKit, and so on). Alongside their well-documented, recommended APIs, these frameworks hide many capabilities that are not widely known. Finding and using such features is more than just an interesting challenge: in certain situations it can bring real benefits to an iOS developer. This was the theme of noppe -san's session "Uncovering iOS's Hidden APIs to Improve Development Efficiency" . noppe-san's presentation was divided into three parts. Perform: the first part briefly explained how to get at hidden methods. The main techniques were, for Objective-C, replacing the .h file of the class you are interested in, and for Swift, adjusting the .tbd or .swiftinterface files. Use Case: the second part, the longest, introduced scenarios in which hidden APIs might be used. Prototyping: convenient and safe. In non-critical code you can favor efficiency, and even if the API changes, the functionality is unaffected. Testing: convenient and relatively safe. Specific test scenarios become easy to cover, although it is advisable to add tests for the private API itself so that any change is detected immediately. Production: clearly unsafe. The API's behavior may change in the future, and there is also the risk of rejection during App Store review. Find: the last part described concrete ways to discover hidden APIs, chiefly examining .h , .tbd , and .swiftinterface files, analyzing stack traces, and sharing information within the community. noppe-san's presentation also included several concrete examples of hidden APIs proving useful, so let's take a look. What stood out at the prototyping stage was an example involving UITextView . As you know, placeholders on text views are a common requirement, yet Apple provides no API for setting one directly, so you normally have to build a custom solution such as adding a label as a subview. This is where UITextView comes in: it has an unexposed method called setAttributedPlaceholder that lets you set a placeholder easily. The method is not permitted for production use, but it can save time at the prototyping or proof-of-concept stage. The testing-related example was UIDebuggingInformationOverlay . This tool was easy to use before iOS 11, but accessing it now takes a bit of ingenuity . Even so, it can still be enabled through UIKit's hidden low-level functionality. UIDebuggingInformationOverlay Learning about private APIs is a good way to deepen your understanding of the tools you use every day. Their use in production is not recommended, but knowing such APIs exist opens up new possibilities in situations that call for unconventional solutions, and learning how they work also helps you hone your own API design skills. This session was extremely practical and gave me a lot of inspiration. I am really glad I attended it. @ speakerdeck Server-driven UI As developers, we are constantly chasing the industry's latest trends, sometimes out of curiosity and the urge to try a new framework or technology early, sometimes simply so as not to fall behind our colleagues. At this year's iOSDC, however, I came across a session on a design paradigm I had known almost nothing about. The approach itself has existed for a while, but I had never had the chance to understand it in depth. The session was Nade-san's "Introduction to Server-Driven UI: An Approach That Receives the Screen State Directly" . Server-driven UI may seem like a somewhat niche technique, but I felt it was worth knowing about and keeping an eye on. Server-driven UI is a mechanism in which the client receives the UI state directly from the backend. With this approach you can change the client-side UI without releasing a new version of the app. It is also very useful for reducing business logic on the client side. The idea is to implement simple, reusable UI components in the client app in advance and control them through server responses: the server specifies which components appear on which screen, in what order, and with what content. On iOS, this approach feels especially natural with SwiftUI, since views can conform to the Codable protocol and directly mirror the server response, a point that Nade-san's presentation demonstrated clearly. Of course, adopting such a paradigm involves several trade-offs compared with a traditional implementation, so let's look at them briefly. Advantages of server-driven UI: Releasing a new feature requires only backend changes rather than distributing a new app version, which shortens the release cycle. Business logic is unified across all platforms, resulting in DRYer code, and changes are reflected on every client at once. With minimal (ideally zero) business logic on the client, developers can concentrate on the finer points of UI and UX and deliver a smoother experience. Challenges of server-driven UI: A large number of generic UI elements must be defined and built in advance, which is a heavy burden for small teams. Testing every combination of those generic UI elements is very costly. The server-side architecture becomes more complex with the additional "Backend for Frontend" layer.
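To make the mechanism concrete, here is a minimal sketch of the idea. The session demonstrated it with SwiftUI and Codable; this sketch uses TypeScript purely for illustration, and the component names and fields are invented, not taken from the presentation:

```typescript
// Hypothetical server-driven UI sketch. "Component", its fields, and the
// string-based renderers are illustrative, not the presenter's actual code.
type Component =
  | { type: "title"; text: string }
  | { type: "banner"; imageUrl: string }
  | { type: "button"; label: string; action: string };

// The client ships one small renderer per component type; the server decides
// which components appear on a screen and in what order.
function render(component: Component): string {
  switch (component.type) {
    case "title":
      return `<h1>${component.text}</h1>`;
    case "banner":
      return `<img src="${component.imageUrl}">`;
    case "button":
      return `<button data-action="${component.action}">${component.label}</button>`;
  }
}

// A screen arrives from the backend as plain data and is rendered as-is:
const screen: Component[] = JSON.parse(
  '[{"type":"title","text":"Welcome"},{"type":"button","label":"Start","action":"start"}]',
);
console.log(screen.map(render).join("\n"));
```

Changing the screen now only requires changing the JSON the server returns, with no client release, which is exactly the trade-off described above.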
As you can see, server-driven UI is a solution best suited to large teams that can manage the backend complexity and infrastructure it requires, but in return it offers major gains in client flexibility and delivery speed. @ speakerdeck In closing iOSDC Japan 2024 was a thoroughly enjoyable and rewarding experience for me. The two sessions introduced here were among the most practically useful of all the ones I attended. If they have piqued your interest, the iOSDC organizers publish recordings of the sessions, so do check them out. Thank you for reading to the end. See you next time!
This article is part of day 3 of KINTO Technologies Advent Calendar 2024 .🎅🎄 I am Somi, a Flutter application developer at KINTO Technologies (hereinafter, KTC). Flutter is an appealing framework that enables you to construct a diverse range of UIs independently of the platform. In particular, with CustomPaint, you can easily produce intricate designs that would be difficult to achieve with the basic widgets alone. Recently, when implementing a QR code recognition screen, an issue arose with creating a border for the recognition area. We tried to use the existing libraries, but there was a limit to how well they could achieve the curve design we wanted. Therefore, we solved the problem by drawing the border directly using CustomPaint and Path . In this article, I will detail what steps we took to complete the border for the QR code recognition screen using CustomPaint and Path. The Goal of the Border Design The border we implemented this time is a curved, translucent white one around the four corners of the QR code recognition area. Using the CustomPainter class , we defined a path on the Canvas with Path, then drew the border using a combination of curves and straight lines. Preparations for Drawing the Border Using CustomPainter First, we define a class for drawing the border, namely, _OverlayPainter . This class extends CustomPainter, and will be responsible for drawing the border on the Canvas. The following is sample code for drawing a border using an already defined _OverlayPainter. Later, I will explain the specifics of implementing it in detail. 
class QrScanPageContent extends StatelessWidget { const QrScanPageContent({super.key}); @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: const Text("QR Code Scanner"), // Screen title ), body: CustomPaint( size: Size.infinite, // Draw it to fit the size of the whole screen painter: _OverlayPainter( squareSize: 200.0, // Size of the border area borderRadius: 20.0, // Roundness of the border’s corners borderThickness: 8.0, // Border thickness ), ), ); } } Setting up the background and recognition area Now let's flesh out the _OverlayPainter mentioned above. First, we draw the background color and the QR code recognition area. We draw the background as a semi-transparent rectangle using the drawRect method, and the QR code recognition area as a rounded rectangle using the drawRRect method. For drawing each, we set the style (color and transparency) using the Paint class. In the next section, I will explain how to draw the border in detail. class _OverlayPainter extends CustomPainter { final double squareSize; final double borderRadius; final double borderThickness; _OverlayPainter({ required this.squareSize, required this.borderRadius, required this.borderThickness, }); @override void paint(Canvas canvas, Size size) { final centerX = size.width / 2; final centerY = size.height / 2; // Draw the background final backgroundPaint = Paint()..color = Colors.grey.withOpacity(0.5); canvas.drawRect( Rect.fromLTWH(0, 0, size.width, size.height), backgroundPaint); // Draw the recognition area final rect = RRect.fromRectAndRadius( Rect.fromCenter( center: Offset(centerX, centerY), width: squareSize, height: squareSize, ), Radius.circular(borderRadius), ); final innerPaint = Paint()..color = Colors.lightBlue.withOpacity(0.1); canvas.drawRRect(rect, innerPaint); // Here, we set the frame’s style and draw the frame. 
} @override bool shouldRepaint(covariant CustomPainter oldDelegate) => false; } The shouldRepaint method decides whether this CustomPainter requires redrawing. In this example, the background color and the size of the recognition area are fixed, so redrawing is unnecessary. Consequently, this method always returns false. However, if you want to draw dynamically or change the size or shape, you need to make this method return true. Setting the border style Next, to get ready to draw the border, we set the line style. Before drawing the border, we define the style using a Paint object. The Paint class provides tools for setting things like the color, thickness, and shape of a line. Here, we set the border to translucent white and define the line shape to be rounded. By setting the border to a translucent white color, we have ensured that the important recognition area can be spotted at a glance. final borderPaint = Paint() ..color = Colors.white.withOpacity(0.5) // Set the border color and transparency ..style = PaintingStyle.stroke // Set the outer border style ..strokeWidth = borderThickness // Line thickness ..strokeCap = StrokeCap.round; // Set the ends of the line to be rounded Calculating the coordinates and sizes To draw the border, we first need to calculate the coordinates and size of each corner. This will enable us to define the start and end points of each corner precisely. The following is an example calculation: const double cornerLength = 55; // Length of each corner double halfSquareSize = squareSize / 2; // Size of half of the recognition area double left = centerX - halfSquareSize; // Left boundary double right = centerX + halfSquareSize; // Right boundary double top = centerY - halfSquareSize; // Top boundary double bottom = centerY + halfSquareSize; // Bottom boundary Coordinate system : In Flutter’s Canvas coordinate system, the top left is (0, 0). 
This means that the top side is calculated using centerY - halfSquareSize , and the bottom side is defined as centerY + halfSquareSize . cornerLength : Define what length of straight line to draw at each corner. halfSquareSize : Calculate the size of half of the QR code recognition area. left, right, top, bottom : Define the boundary coordinates of the recognition area with respect to the coordinates of the center. Pictorially, the above formulas look like the following figure. Drawing the Border First, we start to draw from the top left corner. To draw the top left corner, we define a path using the Path class. Path is a handy class that lets you specify a variety of shapes like lines, curves, and arcs, then draw them on the Canvas. 1. Draw a straight line from right to left We move the start point to the top end of the corner, then draw a straight line going left. Path topLeftPath = Path(); // Define a new path topLeftPath.moveTo(left + cornerLength, top); topLeftPath.lineTo(left + borderRadius, top); The above code results in drawing a line like the one below. 2. Draw a corner curve We add a curve that starts from an end point of the straight line. Using the arcToPoint method, we draw a curve from the start point to a specified end point. This enables us to create a natural join from the straight line to the curve. In the code below, we set the curve’s end point with Offset and its radius with Radius, giving us a rounded corner for the QR code area. topLeftPath.arcToPoint( Offset(left, top + borderRadius), // End point of the curve radius: Radius.circular(borderRadius), // Radius of the curve clockwise: false, // Draw the curve counterclockwise ); The code above generates a curve with a rounded corner, as shown below. 3. Draw a vertical straight line We extend a line downward from the endpoint of the curve. topLeftPath.lineTo(left, top + cornerLength); The code above adds a vertical line, as shown below. 4. 
Draw the path on the Canvas We use the defined Path to draw on the Canvas with borderPaint. canvas.drawPath(topLeftPath, borderPaint); Processing the remaining corners The following code example demonstrates how to draw the other three corners after the top left one: // Bottom left corner final bottomLeftPath = Path() ..moveTo(left + cornerLength, bottom) ..lineTo(left + borderRadius, bottom) ..arcToPoint( Offset(left, bottom - borderRadius), radius: Radius.circular(borderRadius), clockwise: true, ) ..lineTo(left, bottom - cornerLength); canvas.drawPath(bottomLeftPath, borderPaint); // Bottom right corner final bottomRightPath = Path() ..moveTo(right - cornerLength, bottom) ..lineTo(right - borderRadius, bottom) ..arcToPoint( Offset(right, bottom - borderRadius), radius: Radius.circular(borderRadius), clockwise: false, ) ..lineTo(right, bottom - cornerLength); canvas.drawPath(bottomRightPath, borderPaint); // Top right corner final topRightPath = Path() ..moveTo(right - cornerLength, top) ..lineTo(right - borderRadius, top) ..arcToPoint( Offset(right, top + borderRadius), radius: Radius.circular(borderRadius), clockwise: true, ) ..lineTo(right, top + cornerLength); canvas.drawPath(topRightPath, borderPaint); Summary Besides precise designs like the QR code border, using CustomPaint and the Path class enables you to create even more complex UI designs as well. Implementing the border for the QR code recognition screen ourselves reaffirmed how flexible Flutter is and how powerful its Canvas functionality can be. However, when using CustomPainter, keep in mind that complex drawing logic can impact performance. If frequent redrawing is necessary, consider optimizing the processing and utilizing other existing widgets. I hope this article can serve as a useful reference for implementing UI design using CustomPaint and Path.
This article is part of day 4 of KINTO Technologies Advent Calendar 2024 Merry Christmas ✌️ Ryomm, who works on developing My Route (iOS) at KINTO Technologies (referred to as KTC), is stepping into a new role this time! Allow me to introduce "Manabyi," an incredibly exciting Slack bot created in collaboration with the Michi-no-eki project, as I take on the role of a phantom bot craftsman. What is Manabyi? It's an incredibly exciting Slack bot designed for gathering information on internal study sessions and events, as well as leveraging the collected data effectively. It covers everything related to events. ![Manabyi](/assets/blog/authors/ryomm/2024-12-04/01.png =200x) Manabyi has two main roles: Search for events Register and publicize new events When a new event is registered in Manabyi, it automatically announces the event to the relevant channels. Users can stay informed about internal events either by following the designated channels or by reaching out directly to Manabyi. You don't need to be a direct stakeholder to access the information. The use of Manabyi will be covered in the [Learning Road Station Series] on a later date. In this article, I'll focus on introducing the technology behind Manabyi. The technology behind Manabyi Manabyi is built with the Slack CLI. https://api.slack.com/automation/quickstart The Slack CLI eliminates the need to build your own infrastructure, since components such as the datastore are hosted on Slack's side. I also appreciate that the development environment can be set up directly on the Slack CLI side. The general structure of Manabyi is outlined as follows: Trigger Manabyi has three triggers. 
Feature Trigger type Add an event Link trigger Delete an event Link trigger Search for events Event trigger ![Link Trigger](/assets/blog/authors/ryomm/2024-12-04/03.png =500x) ![Link Trigger Form](/assets/blog/authors/ryomm/2024-12-04/04.png =500x) ![Event Trigger](/assets/blog/authors/ryomm/2024-12-04/02.png =500x) There are four types of workflow triggers in Slack. Trigger name Description Link trigger Once created, a URL is issued, and the trigger is executed when the link is clicked in Slack (it is not valid outside Slack) Scheduled trigger Runs on a schedule Event trigger Runs on events such as mentions and reactions Webhook trigger Executed when a specific URL receives a POST request https://api.slack.com/automation/triggers Manabyi is primarily designed for users to utilize the search function, while the add/delete functions are intended exclusively for event organizers. To prevent accidental additions or deletions of events, we have separated the trigger types accordingly. When you register an event, notifications are sent to two channels: the channel that invoked the bot and the channel designated for announcing new events ( #notice-new-event ). ![Notifications](/assets/blog/authors/ryomm/2024-12-04/05.png =500x) This ensures that you're informed about newly launched events, regardless of the channel where they were created. Of course, you can also retrieve information by interacting with Manabyi using keywords of interest. Block Kit Slack uses a framework called Block Kit to create rich visual messages. You can experience it with the following tool called Block Kit Builder, so if you have never used it before, please give it a try. https://api.slack.com/tools/block-kit-builder Event search results are also rendered with Block Kit to make them easier to read. 
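As a rough sketch of what such a message payload can look like (the field contents below are invented for illustration and are not Manabyi's actual schema; the block shapes themselves are standard Block Kit):

```typescript
// Hypothetical Block Kit payload builder. Titles, text, and URLs are
// placeholders; only the block structure reflects the real Block Kit format.
interface Block {
  type: string;
  [key: string]: unknown;
}

function buildEventMessage(title: string, detailsUrl: string): Block[] {
  return [
    // Event title as a bolded section
    { type: "section", text: { type: "mrkdwn", text: `*${title}*` } },
    // rich_text preserves line breaks and inline formatting, unlike a plain string
    {
      type: "rich_text",
      elements: [
        {
          type: "rich_text_section",
          elements: [{ type: "text", text: "A new study session was registered." }],
        },
      ],
    },
    // Long reference links are tucked behind a single "Details" button
    {
      type: "actions",
      elements: [
        {
          type: "button",
          text: { type: "plain_text", text: "Details" },
          url: detailsUrl,
        },
      ],
    },
  ];
}

console.log(JSON.stringify(buildEventMessage("Kotlin Study Session", "https://example.com/event"), null, 2));
```

Sending an array like this as a message's blocks yields a title, a rich-text body, and one Details button instead of a wall of raw URLs.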
https://api.slack.com/block-kit ![Message using Block Kit](/assets/blog/authors/ryomm/2024-12-04/08.png =500x) The reference page links, such as those directing to a Slack channel, a Confluence page, or a kick-off video, can often be lengthy and cluttered. To address this, we streamlined the experience by embedding them under a "Details" button. The description text field is configured to use the rich_text type to accommodate requests for including links alongside plain text. Slack bot development tips corner Start by deciding whether or not to use Block Kit when sending messages. Initially, I used the string type but later decided to switch to the rich_text type. However, since the string type and rich_text type are not compatible, migrating the datastore was not a straightforward process. Moreover, once the registered data was converted to rich_text, I wanted it to take advantage of features like line breaks and links! After considering various factors, including those mentioned, we decided to take a bold approach by clearing the data in the datastore and starting fresh. Super exciting! If you can, it is best to decide up front whether to use Block Kit. https://api.slack.com/automation/datastores Be careful with block_id For more information, see this article: [ Slack CLI ] block_id conflicts prevent sending messages to Slack Workflow definitions run only at build time For testability, when writing a function that uses a UUID, you would ideally pass the id in as an argument from the caller: const addEventFunctionStep = AddEventWorkflow.addStep( AddEventFunction, { id: Crypto.randomUUID(), // generating the ID at the call site looks right, but this line runs only once title: formData.outputs.fields.title } ) However, the Workflow definition is executed only at build time, while the function is executed on each subsequent call. For this reason, any code that needs to run on every execution, such as generating a UUID, should live inside the function itself. 
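This pitfall can be illustrated outside of Slack entirely. In the following hedged sketch (plain TypeScript, not the Slack CLI API), defineWorkflow stands in for the build-time workflow definition: a value computed in the definition is frozen once, while a value computed inside the function body is fresh on every call.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative sketch, not the Slack CLI API: "defineWorkflow" mimics a
// workflow definition that runs exactly once, at build time.
function defineWorkflow() {
  const frozenId = randomUUID(); // evaluated once, then baked into the definition
  return {
    runWithFrozenId: () => frozenId, // same ID on every run
    runWithFreshId: () => randomUUID(), // new ID generated on every run
  };
}

const workflow = defineWorkflow();
console.log(workflow.runWithFrozenId() === workflow.runWithFrozenId()); // true: frozen at definition time
console.log(workflow.runWithFreshId() === workflow.runWithFreshId()); // false: generated per call
```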
The right documentation is hard to find What the Slack API documentation says can generally be applied as is. In particular, I often look up which types can be used in trigger inputs, forms, and datastores. If that sounds like you, this page is the answer! https://api.slack.com/automation/types The technology supporting Manabyi Manabyi is currently run as an inner-source project, and since its development environment is well prepared, let me introduce it. CI/CD Test The Slack CLI runs on Deno, so you can run tests with deno test . name: Slack App Test on: pull_request: types: [opened, synchronize, reopened, ready_for_review] jobs: build: runs-on: ubuntu-latest timeout-minutes: 5 steps: - uses: actions/checkout@v4 - name: Install Deno runtime uses: denoland/setup-deno@v1 with: deno-version: v1.x - name: Install Slack CLI if: steps.cache-slack.outputs.cache-hit != 'true' run: | curl -fsSL https://downloads.slack-edge.com/slack-cli/install.sh | bash - name: Test the app run: | cd app/ deno test --no-check Deploy You can get a service token starting with xoxp- by running the slack auth token command in your console. Register this as a secret on GitHub. The workflow is defined as follows: name: ➡️ Slack App Deploy on: push: branches: [ main ] workflow_dispatch: jobs: build: runs-on: ubuntu-latest timeout-minutes: 5 steps: - uses: actions/checkout@v4 - name: Install Deno runtime uses: denoland/setup-deno@v1 with: deno-version: v1.x - name: Install Slack CLI if: steps.cache-slack.outputs.cache-hit != 'true' run: | curl -fsSL https://downloads.slack-edge.com/slack-cli/install.sh | bash - name: Deploy the app env: SLACK_SERVICE_TOKEN: ${{ secrets.SLACK_SERVICE_TOKEN }} run: | cd app/ slack deploy -s --token $SLACK_SERVICE_TOKEN You can now deploy regular app code updates. However, it's important to note that trigger updates and datastore configuration changes cannot be automated, so these deployments must be handled manually. 
Issue Template Templates are provided to facilitate the submission of bugs, feature requests, and questions. To be honest, they're rarely used, but I appreciate them because they feel very OSS-like. Project We use GitHub Projects. We primarily use Jira for our internal projects, but since Manabyi doesn't utilize Confluence, it's more convenient to submit issues on GitHub. (I also wanted the team to experience the benefits of GitHub Projects.) Therefore, I decided to manage everything entirely on GitHub. Since I belong to the Mobile Development Group, where the focus is primarily on Swift and Kotlin, many team members find writing TypeScript to be a challenging hurdle. Additionally, it's challenging to raise awareness about the existence of such a project outside the department, leaving many issues to address as an inner-source project. I am setting up a development environment with the hope that, as Manabyi continues to grow, someone will contribute to it someday. Conclusion That was an introduction to Manabyi. Manabyi is still in its infancy, but I hope it will grow alongside the culture of KTC...!
Introduction Hello! I am high-g ( @high_g_engineer ) from the New Car Subscription Development Group at the Osaka Tech Lab. In this article, I reflect on the Developers Summit 2024 KANSAI, held on September 18, 2024. What Is Dev Summit KANSAI? The original Developers Summit (hereinafter, Dev Summit) is a conference-type event for software developers that has been held annually in Tokyo and online since 2003. Dev Summit KANSAI has been held since 2011 as a spin-off event of that, and is a much-loved festival for IT engineers in Kansai. The theme for 2024 was “Let’s create the new standard together.” The sessions covered a wide range of topics, including security, AI, development methods, development productivity, DevOps, and engineer careers. On the day, all the sessions were virtually packed, and with those and the sponsor booths combined, the whole event was really bustling. https://event.shoeisha.jp/devsumi/20240918 Osaka Tech Lab’s First Time as a Sponsor Our company participated in the event by setting up a sponsor booth and taking the podium for a sponsor session. Starting this fiscal year, KINTO Technologies has been sponsoring a variety of events, but this was the first foray into event sponsorship for the Osaka Tech Lab (that's what we call KINTO Technologies' Osaka office). So we needed to think about: What kind of booth to set up The novelty items to prepare How to guide attendees at the booth To that end, we had meticulous discussions to make sure we were fully prepared in time for the big day. The Sponsor Booth Here is our completed booth! I would like to take this opportunity to touch on a few items in particular. ![KINTO Technologies’ completed sponsor booth](/assets/blog/authors/high-g/20241030/img4.jpg =512x) The Osaka Tech Lab Seen through Data First, I will talk about the board with all the graphs and figures on it in the center of the image. This is a visual representation of information gleaned from talking to Osaka Tech Lab’s members. 
An interesting feature is how the overall design mimics the parts of a plastic model kit! Our company’s designers created it quickly, yet the final product is very polished. ![Board with Osaka Tech Lab represented visually through graphs](/assets/blog/authors/high-g/20241030/img5.jpg =512x) Survey board Next is the Survey Board on the right of the photo. This is a board for getting survey responses from booth visitors. We gave each visitor a sticker and asked them to place it on the job categories on the board that applied to them, helping visualize the IT domains they belong to. ![Board for a sticker survey](/assets/blog/authors/high-g/20241030/img6.jpg =512x) Survey Results The survey results revealed that many visitors are somewhat familiar with KINTO and KINTO Technologies, but there are still numerous target groups that we need to engage with. Regarding occupations, many people placed stickers between 'back-end' and 'front-end.' While the nature of the event may have influenced this, it also highlighted that many engineers in Kansai tend to be more generalists than specialists. I believe we gathered valuable data to help expand awareness of the Osaka Tech Lab. Thank you to everyone who cooperated with our survey board! ![Survey results](/assets/blog/authors/high-g/20241030/img7.jpg =512x) Kumobii Stuffed Toy There is a white, fluffy toy sitting on the left in the photo. This is KINTO’s official mascot character, Kumobii . Absolutely adorable, right? It was a huge hit with booth visitors, and Findy and KIKKAKE even posted it on their X accounts. Thank you so much! ![Twitter image: Findy](/assets/blog/authors/high-g/20241030/img8.jpg =512x) ![Twitter image: Kikkake](/assets/blog/authors/high-g/20241030/img9.jpg =512x) Novelty Item On the day, we gave the first 100 visitors to our booth a freebie multi-card tool (a tool set consisting of an aluminum card that can serve multiple roles). 
We also handed out Kumobii-shaped clips made of paper to people who came along to our sponsor session. ![Multi-card tool](/assets/blog/authors/high-g/20241030/img10.jpg =512x) ![Kumobii paper clip](/assets/blog/authors/high-g/20241030/img11.jpg =512x) Sponsor Session The speaker was Okita-san, who has been a member of the Osaka Tech Lab since it was established. The session theme was “Pursuing a Dual Career as a Project Manager and a Mobile App Engineer to Create the Future of Mobility from Osaka!: Challenges the Osaka Tech Lab and I Get to Take On in the Toyota Group.” The following is an overview of the session. An introduction to Okita-san’s career to date Carving out a career as a project manager who doubles as a mobile app engineer, something unprecedented in the company Sowing seeds for launching, developing, and operating products at Osaka Tech Lab The phrase “never forget your passion” left a big impression on me, and it struck me that the career Okita-san currently has is the fruit of always having soldiered on instead of just giving up and quitting, even when motivation was low. Closing Thoughts This was our first time taking part in Dev Summit KANSAI, and I was truly amazed at how many people were there despite it being a weekday. I got to go along to some of the sessions myself in between helping out with our booth, and I learned a lot, got lots of ideas for my next steps, and enjoyed the whole event very much indeed. Large-scale conferences that encompass a wide range of technical fields are rare in Kansai, so I hope this level of passion continues indefinitely. I would like to express my gratitude to all the staff involved in running Dev Summit KANSAI. Thank you very much.
This article is the second post in the KINTO Technologies Advent Calendar 2024 . Introduction to the background of our activities Hello. I'm “Kinchan” from the Manabi-no-Michi-no-Eki team. I usually work as a corporate engineer, maintaining and managing the IT system used by our entire company. Recently, I have been involved in various activities, including as a member of organizations such as the "Generative AI Utilization Project" and the "Technical Public Relations Group." In a previous tech blog , I introduced the origin of the "Manabi-no-Michi-no-Eki." It also described our "in-house podcasting" activities. In this article, I will provide a detailed introduction to our podcast initiative. This podcast initiative began with the following idea from HOKA-san, a member of the Manabi-no-Michi-no-Eki: I want to share various in-house activities through podcasts! In addition to HOKA-san's very simple motivation, our team was made up of members who basically never said “No,” which led us to pursue it as part of the Manabi-no-Michi-no-Eki team activities. Of course, since this is an initiative of the Manabi-no-Michi-no-Eki team, it is important to effectively combine it with “information dissemination for learning purposes.” After some discussions, we arrived at the following conclusion: conducting interviews with people who host, manage, or participate in in-house study groups, and disseminating the interview results. Our purposes were as follows: To answer the question, "There are many study groups in the company, but who is organizing them and what motivates them?" To let our audience hear first-hand voices from those who have actually participated in in-house study groups and feel their value. If someone is interested, this could be an opportunity for them to join a study group. By making the existence of various study groups visible and audible, a "culture of learning" will take root within the company. "What is a podcast?" 
Even though we had decided to start podcasting, we had questions like, "What should we do to make a podcast?" Is it something that should be released in the form of Internet radio? (Would people outside the company listen to it?) Does it have to be distributed using a dedicated app? (This could involve preparation efforts by both the distributor and the listeners...) What kind of equipment is required? The more we thought about it, the more obstacles we found. That said, our goal is simply to "deliver in-house information to the rest of the company," so we decided to proceed with an "agile mindset": first using an available system to make a prototype to test out the feel, then receiving feedback and making kaizen (improvements). Now it's time for the first interview! Once that's decided, it's time to choose the study group to interview! It just so happened that a large-scale study session called a "joint study session" was scheduled within the company, so we decided to attend this study session first and conduct interviews with its management team members. However, on the day of the study session, things got hectic, and we were unable to record the interview. As a result, we decided to take an alternative approach by having the management team members gather at a later date for an interview. (In fact, this alternative approach became the model for our future podcasts.) The actual interview was very exciting, and we were able to complete the recording successfully. However, we then faced a new challenge! In fact, we had reached this point without deciding how to create the podcast content or how to distribute it. After much discussion and trial and error within our team, we settled on the following: Processing of the recorded data Using Clipchamp (a Microsoft 365 family product) to create a video file with a focus on audio. Streaming method The file is uploaded to the in-house SharePoint, where the audience can access and play it on their PC or smartphone. 
The podcast release flow was finally decided as follows: Completing the content Conducting an internal review within the Manabi-no-Michi-no-Eki team Conducting a review by the interviewees Conducting a supervisor review (after the first review, if there are no major concerns, the supervisor hands decision-making responsibility over to us for subsequent reviews) In-house announcement (= the release) We established this simple flow, suitable for in-house-only content. After that, once all the checks passed, we were finally able to release it inside the company! ![](/assets/blog/authors/ktc-taku-yajima/2024-12-02/started-a-podcast001.png =700x) Subsequent podcasts Having gained confidence from the release of our first podcast, we moved forward with planning our second and third podcasts. With each interview, we accumulated knowledge and were able to formalize our process to a certain extent. Formalization of the planning process Collecting information on in-house study sessions and information dissemination activities The Manabi-no-Michi-no-Eki team approaches the management team members to conduct interviews Podcasting the interview results Internal announcement after reviews by the management team and interviewees Formalization of recording, editing, and distribution Interviews are conducted via Zoom (since our members are dispersed across multiple locations). Each person's PC microphone and the conference room microphone are used; sound quality concerns were shelved for the time being. The Zoom recording data is used as the audio source. The audio source data is edited and processed using Clipchamp. After editing, it is saved in the company's internal storage (MS SharePoint). Listeners access the content via MS Stream. Thanks to this formalized process, we have been able to operate smoothly, and so far we have distributed 11 pieces of content.
About the future As we continue our podcasting activities, we have developed a desire to expand in various ways. We want to interview people who are involved in various in-house activities beyond study group members. We also want to interview members of the management and have them share their candid opinions with the rest of the company. We want to expand our output beyond just the company. We, the Manabi-no-Michi-no-Eki team, will continue to actively share information in the coming years. Stay tuned!
Introduction I am Okapi, from the Quality Assurance Group. Since I often participate in projects as the primary person in charge of QA, in this article I'd like to share how the QA team participates in projects, communicates with the development team, and progresses its testing at KINTO Technologies. Purpose of This Article When a project is run with a team that has never worked with QA before, there is often some trial and error as they try to understand what QA can contribute and how the process will unfold. To ensure smoother progress in such cases, I aim to raise awareness of QA. What is QA? QA is an acronym for "Quality Assurance," a broad term that involves ensuring quality in various aspects. We conduct testing from the user's perspective, focusing on scenario testing based on assumptions about actual user behavior and verifying the screen UI to prevent any inconvenience and ensure usability. The primary roles of the QA and development teams in testing are outlined in the table below.

| Item | QA | Development | Remarks |
| ---- | ---- | ---- | ---- |
| Verify specifications against system requirements | ◎ | 〇 | QA ensures that functions and performance meet system requirements from the user's perspective. |
| Verify according to user usage scenarios | ◎ | △ | Mainly verified by QA |
| Other than the above | △ | ◎ | When requested (e.g., the development team's external integration testing), QA will verify within feasible resource limits |

Overview of QA Testing
- Test Plan Phase: The overall project schedule and specification documents (system requirements and screen specifications) are shared so that QA can create a test plan detailing how the subsequent phases will proceed.
- Test Analysis Phase: Based on the specification documents (system requirements and screen specifications), create test perspectives that clarify the scope of testing (to be tested / not to be tested).
- Test Design Phase: Create test cases (prerequisites, procedures, and expected results) based on the test perspectives.
- Test Implementation Phase: Conduct tests based on the created test cases, report defects, and verify fixes.

Points Requiring Communication with the Development Team
- Test Plan Phase: Align with the development team on the test plan, which outlines the QA testing period, verification environment, the person in charge of conducting QA testing, the development team contact, and target devices/browsers.
- Test Analysis Phase: Use JIRA or Confluence to ask questions about the information needed to create test perspectives. For areas where understanding differs, or where there are multiple development team contacts, hold meetings to confirm and organize the specifications. Once the specifications are organized, create test perspectives that clearly define the test scope (to be tested / not to be tested) and confirm alignment with the development team. For the test scope, from a black-box testing standpoint, testing targets the areas users will actually interact with (as deemed necessary by QA for the project), while excluding areas users do not access, such as system administration screens. However, since scenario testing verifies a sequence of steps, areas that intersect with user-side testing are included as test targets even if they would normally be excluded. *For regression testing, where quality is already assured, the test scope is adjusted based on the shared schedule and resources.
- Test Design Phase: The test procedures are reviewed, but test cases are created based on the already-agreed test perspectives, so alignment with the development team is generally not needed at this phase.
- Test Implementation Phase: Issues (results differing from specifications), questions (missing or unclear specifications), and improvement requests (aligned with specifications but unclear to users) are reported in JIRA.
Once the development team has addressed them (fixes), the QA team re-verifies. Testing is considered complete when all test cases and JIRA tickets have been addressed, or when the remaining JIRA tickets are excluded from the current QA test scope. Once implementation is complete, we participate in the overall retrospective (e.g., KPT analysis) to discuss areas for improvement and apply them to future projects. Future Challenges Depending on the project or product, how documents are organized may differ, so after aligning on specifications in JIRA or Confluence, QA organizes and documents the overall user-facing system specifications and workflows before moving forward. However, depending on the system's scale, this can be time-consuming. By organizing documents and processes within the QA team and ensuring smooth communication with the development team, we aim to efficiently summarize specifications for accurate understanding. Conclusion The QA team is an independent organization, which may give the development team the impression that they are simply handing tasks off to QA, or that QA merely runs tests. However, as part of KINTO Technologies, we see ourselves as partners working together to create high-quality systems, and we aim to keep fostering that collaborative relationship.
I'm Ryomm, and I develop my route (iOS) at KINTO Technologies. The my route app's library management tool has finally(!) migrated from CocoaPods to Swift Package Manager (SPM). Introduction my route had been using CocoaPods. However, around the summer of 2024, CocoaPods published the following announcement: https://blog.cocoapods.org/CocoaPods-Support-Plans/ It states that CocoaPods is moving into maintenance mode. CocoaPods will not stop working, and security issues will still be addressed, but GitHub issue support and new feature development will end. This means that in the future, a new library we want to adopt might no longer support CocoaPods, or a problem in CocoaPods itself might never get fixed. Migrating my route to SPM had been discussed for some time, and this announcement was the tailwind that finally got the migration started. What is Swift Package Manager? SPM is Apple's first-party library management tool. https://github.com/swiftlang/swift-package-manager Other library management tools for iOS include third-party ones such as CocoaPods and Mint. The biggest difference from these is that SPM is integrated into Xcode. As a result, you only need to update Package.resolved when necessary, and the specified library versions are fetched automatically when the project file is opened or at build time. There is no longer any need to ask the whole team to run pod install after every version bump. The migration It was mostly a matter of moving libraries over one by one, but there were a few stumbling blocks and tricks worth sharing. The specified library version is not fetched When changing a library's version, running "Reset Package Caches" or "Update to Latest Package Versions" in Xcode sometimes failed to fetch the version specified in Package.swift. Deleting DerivedData did not fix it either. The cause is deep-seated caches remaining in ~/Library/Caches/org.swift.swiftpm and ~/Library/org.swift.swiftpm; deleting these makes the correct library versions come down again. The build fails only for tests my route has multiple schemes, and each depends on a slightly different set of libraries. We therefore use a structure separated from the project's own Package.swift: a Dependencies package created under the same workspace as the project file, in which the libraries are managed so that each scheme maps one-to-one to a product (framework). Each of these products is linked to its target under Frameworks, Libraries, and Embedded Content, as shown in the screenshot on the right. With this structure, however, we ran into a problem: xcodebuild build succeeded while xcodebuild build-for-testing failed (these correspond to Run and Test in the Xcode GUI). Normally, package code is tested inside the package itself, but in the structure above, tests run in the project's main target, i.e., outside the package. In other words, this is... unauthorized construction... but since we plan to eventually move the tests into the same package, we wanted to keep the structure for now and still be able to run tests. The cause was that Run and Test apparently behave differently around the linker. Call the Run build products ① and the Test build products ②: ① contains the SPM frameworks from Dependencies, but ② does not.
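For illustration, a Dependencies package like the one described above might be sketched roughly as follows. Everything here is a hypothetical placeholder — the scheme-product names, the Alamofire dependency, and the version numbers are assumptions for the example, not my route's actual dependency list:

```swift
// swift-tools-version:5.9
// A sketch of a per-scheme dependencies package. All names and
// versions below are illustrative assumptions, not the real project.
import PackageDescription

let package = Package(
    name: "Dependencies",
    platforms: [.iOS(.v15)],
    products: [
        // One framework product per app scheme (hypothetical scheme names)
        .library(name: "ProductionDependencies", targets: ["ProductionDependencies"]),
        .library(name: "StagingDependencies", targets: ["StagingDependencies"]),
    ],
    dependencies: [
        // Example third-party package; each scheme's target picks what it needs
        .package(url: "https://github.com/Alamofire/Alamofire", from: "5.9.0"),
    ],
    targets: [
        .target(
            name: "ProductionDependencies",
            dependencies: [.product(name: "Alamofire", package: "Alamofire")]
        ),
        .target(
            name: "StagingDependencies",
            dependencies: [.product(name: "Alamofire", package: "Alamofire")]
        ),
    ]
)
```

Each library product would then be linked to the matching app target under Frameworks, Libraries, and Embedded Content, as described above.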
Because of this, if the tests in MainTarget depend on libraries from Dependencies, the build fails. A simple workaround is to move the libraries the tests need into MainTarget's own SPM dependencies, since those are included in both the Run and Test builds. The fundamental fix, as mentioned above, is to make each package's tests self-contained within that package. Putting SwiftLint and LicensePlist on SPM We want SwiftLint and LicensePlist to be included in the project's Build Phases and run at build time, so we create a separate package in a location independent of the workspace:

```
Project/
├── Test.xcworkspace
├── Test.xcodeproj
├── Test/
│   └── ...
├── ...
└── tools // <- this!
    ├── Package.swift
    └── Package.resolved
```

We add the libraries we want in Build Phases, such as SwiftLint and LicensePlist, to this newly created tools package. Then we prepare a shell script like the following and run it once to fetch the binaries locally:

```shell
SPM_PACKAGE_PATH=tools
SPM_BUILD_PATH=tools/.build/release

echo "swiftlint" && $SPM_BUILD_PATH/swiftlint --version \
  || swift run -c release --package-path $SPM_PACKAGE_PATH swiftlint --version

echo "license-plist" && $SPM_BUILD_PATH/license-plist --version \
  || swift run -c release --package-path $SPM_PACKAGE_PATH license-plist --version
```

After this, swiftlint can be executed as tools/.build/release/swiftlint, so we call it from a Build Phase; LicensePlist works the same way. When building from Bitrise or other CI, running this script before building the project confirms that the tools execute there as well. Closing A few months have passed since the migration, and while some issues remain, swapping libraries in and out has become easier. Personally, I also find it convenient that, when I want to determine whether a problem is caused by a library, I can quickly check it in the Playgrounds app. Now that SPM is available in the project, we can also use Apple libraries that are distributed only via SPM. Going forward, we want to take advantage of these libraries to make our implementation even better.
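For reference, the manifest of such a tools package could look something like the sketch below. The repository URLs are the tools' public GitHub repositories, but the tools-version, platform, and version numbers are illustrative assumptions, and SPM requires at least one source file (e.g., an empty Sources/tools/Empty.swift) for the placeholder target:

```swift
// swift-tools-version:5.7
// Sketch of a build-tools-only package. Version numbers are
// illustrative; pin whatever versions your project needs.
import PackageDescription

let package = Package(
    name: "tools",
    platforms: [.macOS(.v12)],
    dependencies: [
        .package(url: "https://github.com/realm/SwiftLint", from: "0.54.0"),
        .package(url: "https://github.com/mono0926/LicensePlist", from: "3.25.1"),
    ],
    targets: [
        // Placeholder target: the package exists only so `swift run`
        // can build the dependencies' executables (swiftlint, license-plist).
        .target(name: "tools", path: "Sources/tools"),
    ]
)
```

With this in place, a command such as `swift run -c release --package-path tools swiftlint` builds the pinned executable into tools/.build/release, which is what the bootstrap script above relies on.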