You've been told your app needs to support German, French, and Japanese by Q3. Where do you start?
If you're like most developers, your first instinct is to wrap all your strings in a translation function and figure out the rest later. That works for the first language. By the third, you'll wish you'd thought about the architecture more carefully.
This guide covers the full software localization process from a developer's perspective: the engineering decisions, the tooling choices, the workflow setup, and the mistakes that cost teams weeks of rework.
Localization vs internationalization: know the difference
Internationalization (i18n) is the engineering work. It's preparing your codebase to support multiple languages: extracting strings, handling locale-aware formatting, supporting RTL layouts, and making your build system output per-locale bundles.
Localization (l10n) is the content work. It's the actual translation, cultural adaptation, locale-specific formatting, and market-specific adjustments.
i18n happens once (with ongoing maintenance). l10n happens for every new language. If your i18n foundation is weak, every new language is painful. Get the engineering right first.
Choosing an i18n framework
Your choice depends on your stack. Here's what works in 2026:
React / Next.js
react-i18next is the most popular choice. It's well-maintained, supports lazy loading of translation files, has pluralization and interpolation, and integrates with most TMS platforms. For Next.js specifically, next-intl provides tighter integration with Next.js routing and server components.
```javascript
// react-i18next basic usage
import { useTranslation } from 'react-i18next';

function Greeting() {
  const { t } = useTranslation();
  return <h1>{t('greeting', { name: 'World' })}</h1>;
}

// en.json: { "greeting": "Hello, {{name}}!" }
// de.json: { "greeting": "Hallo, {{name}}!" }
```
For a deeper dive with code examples, check the JavaScript translation API tutorial.
Vue
vue-i18n is the standard. Similar feature set to react-i18next, well-integrated with Vue's reactivity system and single-file components.
Backend (Node.js, Python, Ruby)
For APIs and backend services, keep it simple. JSON or YAML files with key-value pairs, loaded at startup. Libraries like i18next (Node.js), gettext (Python), or i18n (Ruby) handle the loading and lookup. Backend localization is mostly for error messages, email templates, and API responses.
Mobile (iOS, Android)
Use the platform's built-in localization system. NSLocalizedString on iOS, strings.xml on Android. These are battle-tested and expected by app store reviewers. Don't fight the platform.
String extraction: do it right the first time
String extraction is the process of moving all user-facing text from your code into separate translation files. This is the most tedious part of localization, and where most shortcuts come back to bite you.
Rules for string extraction:
- Extract everything. Error messages, button labels, placeholder text, tooltip content, validation messages, email subjects, notification text, image alt text. If a user can see it, it needs to be in a translation file.
- Use descriptive keys. `auth.login.button.submit` is better than `button_1`. When a translator sees the key, they should understand the context.
- Include context for translators. Many i18n formats support description fields. Use them. "Submit" as a button label translates differently than "submit" as a verb in a sentence.
- Handle plurals properly. English has 2 plural forms (1 item, 2 items). Arabic has 6. Polish has 4. Your i18n library should handle CLDR plural rules, not just "if count === 1".
- Don't concatenate strings. `t('greeting') + ' ' + name` breaks in languages with different word order. Use interpolation: `t('greeting', { name })`.
- Don't embed HTML in translations. `t('bold_text')` where the value is `"Click <b>here</b>"` forces translators to work around HTML tags. Use your i18n library's rich text support instead.
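The interpolation rule exists because word order differs across languages. A toy `t()` sketch (not a real i18n library; the message strings are illustrative) shows why templates survive reordering while concatenation doesn't:

```javascript
// Toy interpolation to illustrate why templates beat concatenation.
// Real libraries like i18next do the same thing with {{name}} syntax.
const messages = {
  en: { greeting: 'Hello, {name}!' },
  ja: { greeting: '{name}さん、こんにちは！' }, // the name comes FIRST in Japanese
};

function t(locale, key, params = {}) {
  const template = messages[locale]?.[key] ?? key;
  // Each {param} is replaced wherever the translator placed it
  return template.replace(/\{(\w+)\}/g, (_, p) => params[p] ?? `{${p}}`);
}
```

With `'Hello, ' + name`, the Japanese translator could never move the name to the front; with a template, placement is entirely up to the translation.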
Continuous localization: the workflow that scales
Continuous localization means new strings get translated as part of your normal development workflow, not as a separate "localization phase" at the end. Here's how it works:
- Developer adds new strings to the source language file (usually `en.json`)
- Push to repository triggers a sync with your TMS (Crowdin, Lokalise, Phrase)
- TMS detects new strings and optionally pre-translates them using a machine translation API
- Translators review and approve (or the pre-translation is used as-is for lower-priority content)
- Approved translations sync back to your repository via pull request
- CI/CD picks up the changes and deploys
For the machine translation step, you want an API that produces output good enough to ship for non-critical strings, and good enough to speed up human translators for critical strings. Context-aware translation APIs are well-suited for this because they handle register, idioms, and technical terminology better than traditional NMT. Most popular TMS platforms support custom MT engine integration via API.
Translation API integration patterns
There are three common patterns for integrating a translation API into your localization workflow:
Pattern 1: TMS-managed translation
Your TMS (Crowdin, Lokalise) calls the translation API directly. You configure the API key in the TMS settings, and it handles pre-translation of new strings. This is the simplest setup.
Langbly integrates with Crowdin as a machine translation engine. See the API documentation for setup instructions.
Pattern 2: CI/CD pipeline translation
A script in your CI/CD pipeline detects untranslated strings and calls the translation API before building. Good for teams that want full control over the process without a TMS.
```javascript
// Simplified CI translation script
const fs = require('fs');
const langbly = require('langbly');

const client = new langbly.Client({ apiKey: process.env.LANGBLY_API_KEY });

async function translateMissing(sourceFile, targetLocale) {
  const source = JSON.parse(fs.readFileSync(sourceFile, 'utf8'));

  // readFileSync throws when the file doesn't exist, so check first
  // instead of relying on a falsy return value
  const targetPath = `locales/${targetLocale}.json`;
  const target = fs.existsSync(targetPath)
    ? JSON.parse(fs.readFileSync(targetPath, 'utf8'))
    : {};

  for (const [key, value] of Object.entries(source)) {
    if (!target[key]) {
      const result = await client.translate(value, { target: targetLocale });
      target[key] = result.translatedText;
    }
  }

  fs.writeFileSync(targetPath, JSON.stringify(target, null, 2));
}
```
Pattern 3: Runtime translation
Translate content on-the-fly at request time. This works for user-generated content, dynamic data, and scenarios where you can't pre-translate everything. Use caching aggressively. Langbly includes 7-day response caching built-in, so repeated translations of the same content don't cost extra.
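On top of any server-side caching, an application-side cache avoids the network round-trip entirely for repeated content. A minimal sketch, where `translateFn` stands in for whatever API client you use and the TTL is an assumption you'd tune:

```javascript
// Application-side TTL cache in front of a translate call. `translateFn`
// is a placeholder for your real API client; the 7-day default mirrors
// the server-side window mentioned above but is an independent layer.
function cachedTranslator(translateFn, ttlMs = 7 * 24 * 60 * 60 * 1000) {
  const cache = new Map(); // "locale:text" -> { value, expires }
  return async function translate(text, targetLocale) {
    const key = `${targetLocale}:${text}`;
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value;
    const value = await translateFn(text, targetLocale);
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

For production use you'd want an eviction policy (this Map grows without bound) and likely a shared store like Redis rather than per-process memory.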
Handling the hard parts
Right-to-left (RTL) languages
Arabic, Hebrew, and Farsi read right-to-left. This affects your entire layout, not just text direction. Navigation moves to the right side. Bullet points align right. Progress bars fill from right to left. Icons that imply direction (arrows, "back" buttons) need to be mirrored.
CSS logical properties make this manageable: use margin-inline-start instead of margin-left, padding-inline-end instead of padding-right. Set dir="rtl" on your HTML element based on locale, and most layouts adapt automatically.
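Deriving the `dir` value from the locale can be as simple as a language-code check. A sketch, with the caveat that the RTL list here is illustrative rather than exhaustive (CLDR has the authoritative script data):

```javascript
// Map a locale to a text direction. The set below covers the common
// RTL languages but is NOT exhaustive -- consult CLDR for full coverage.
const RTL_LANGUAGES = new Set(['ar', 'he', 'fa', 'ur']);

function textDirection(locale) {
  const lang = locale.split('-')[0].toLowerCase();
  return RTL_LANGUAGES.has(lang) ? 'rtl' : 'ltr';
}

// In the browser you would then set:
// document.documentElement.dir = textDirection(activeLocale);
```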
Text expansion and contraction
German text is typically 30% longer than English. Chinese and Japanese can be 30% shorter. Your UI needs to handle both extremes without breaking.
Practical fixes: use flexible containers (flexbox, grid), avoid fixed widths on text elements, test with German early (it typically produces the longest strings among common target languages), and design buttons with enough padding for longer labels.
CJK (Chinese, Japanese, Korean) considerations
CJK scripts don't use spaces between words. This affects word-wrapping, search functionality, and text truncation. Make sure your CSS includes word-break: break-word for CJK content. Line breaks can occur between any characters, not just at spaces.
Font rendering also differs. CJK characters need larger font sizes to be legible. A font size that works for Latin text may be too small for Japanese.
Plural rules
English has two plural forms: singular and plural. Other languages are more complex:
- Arabic: 6 forms (zero, one, two, few, many, other)
- Polish: 4 forms (one, few, many, other)
- Japanese: 1 form (no plural distinction)
- Russian: 3 forms (one, few, many)
Use ICU MessageFormat or your i18n library's plural handling. Never build plural logic manually.
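In JavaScript, the CLDR plural categories listed above are exposed directly by the built-in `Intl.PluralRules`, which is what i18n libraries rely on under the hood:

```javascript
// Intl.PluralRules (built into modern JS runtimes) maps a count to its
// CLDR plural category for a locale -- no hand-rolled `count === 1` logic.
function pluralCategory(locale, count) {
  return new Intl.PluralRules(locale).select(count);
}

// pluralCategory('en', 1)  -> 'one'
// pluralCategory('pl', 3)  -> 'few'
// pluralCategory('pl', 5)  -> 'many'
// pluralCategory('ar', 2)  -> 'two'
// pluralCategory('ja', 42) -> 'other'  (Japanese draws no plural distinction)
```

Your translation files then provide one string per category, and the library picks the right one at render time.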
Testing localized software
Localization bugs are subtle. They don't crash your app, they just make it look unprofessional. A structured testing approach catches most issues:
- Pseudo-localization: Replace all strings with accented characters (e.g., "[Ḩḗŀŀǿ Ẇǿřŀḓ]") to spot hardcoded strings and layout issues without actual translation.
- Screenshot testing: Render every screen in every locale and visually compare. Tools like Percy or Chromatic automate this.
- Automated checks: Missing translations, mismatched placeholders, strings that exceed length limits, untranslated strings in production builds.
- Native speaker review: Have someone who speaks the language naturally use the product. They'll catch unnatural phrasing, wrong register, and contextual errors that automated tools miss.
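The pseudo-localization check above needs no translation service at all; a toy transform is enough to make hardcoded strings jump out. The character map here is a small illustrative sample, not a complete one:

```javascript
// Toy pseudo-localizer: swaps common ASCII letters for accented
// look-alikes and wraps the result in brackets. Any string that still
// renders as plain English was never extracted; missing brackets at the
// edges reveal truncation. The map below is deliberately partial.
const ACCENTED = { a: 'á', e: 'ḗ', i: 'í', o: 'ǿ', u: 'ū', A: 'Å', E: 'Ē', H: 'Ḩ', W: 'Ẇ' };

function pseudoLocalize(s) {
  const swapped = [...s].map((ch) => ACCENTED[ch] ?? ch).join('');
  return `[${swapped}]`;
}
```

Production tools also pad strings by ~30% to simulate German-style text expansion at the same time.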
What to use for the translation itself
Your translation approach should match the content importance:
- Machine translation (API): Good for UI strings, user-generated content, internal tools. Cost: $2-4 per million characters with Langbly, $20-25 with Google/DeepL. See our Google Translate pricing breakdown and DeepL pricing guide for detailed cost comparisons.
- Machine translation + human review: Good for product UI, help center, documentation. Adds $0.02-0.05 per word for the review pass.
- Professional human translation: Required for legal content, marketing copy, and content where tone and nuance carry significant weight. Costs $0.08-0.20 per word.
Most software projects use a mix: machine translation for 80% of strings, human review for the remaining 20% that are customer-facing or high-stakes.
Bottom line
Software localization is engineering work first, translation work second. Get your i18n architecture right, set up a continuous workflow that doesn't slow down feature development, and use the right level of translation quality for each content type.
The translation part is the most commoditized piece of the puzzle. Translation APIs deliver production-quality output for most UI strings at a fraction of a cent per string. The hard part is everything else: the layout adaptation, the locale formatting, the plural rules, the RTL support, and the ongoing maintenance as your product evolves.
Start with one language, learn from the experience, and scale from there. See our localization strategy guide for the broader planning framework.